00:00:00.001 Started by upstream project "autotest-per-patch" build number 124203 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.064 The recommended git tool is: git 00:00:00.064 using credential 00000000-0000-0000-0000-000000000002 00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.086 Fetching changes from the remote Git repository 00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.112 Using shallow fetch with depth 1 00:00:00.112 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.112 > git --version # timeout=10 00:00:00.142 > git --version # 'git version 2.39.2' 00:00:00.142 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.169 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.170 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.454 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.466 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.479 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:06.479 > git config core.sparsecheckout # timeout=10 00:00:06.491 > git read-tree -mu HEAD # timeout=10 00:00:06.506 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:06.522 Commit message: "pool: fixes for VisualBuild class" 00:00:06.522 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:06.654 [Pipeline] Start of Pipeline 00:00:06.666 [Pipeline] library 00:00:06.667 Loading library shm_lib@master 00:00:06.667 Library shm_lib@master is cached. Copying from home. 00:00:06.680 [Pipeline] node 00:00:06.689 Running on VM-host-SM4 in /var/jenkins/workspace/freebsd-vg-autotest_2 00:00:06.691 [Pipeline] { 00:00:06.701 [Pipeline] catchError 00:00:06.703 [Pipeline] { 00:00:06.713 [Pipeline] wrap 00:00:06.720 [Pipeline] { 00:00:06.727 [Pipeline] stage 00:00:06.728 [Pipeline] { (Prologue) 00:00:06.741 [Pipeline] echo 00:00:06.742 Node: VM-host-SM4 00:00:06.746 [Pipeline] cleanWs 00:00:06.754 [WS-CLEANUP] Deleting project workspace... 00:00:06.754 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.759 [WS-CLEANUP] done 00:00:07.018 [Pipeline] setCustomBuildProperty 00:00:07.125 [Pipeline] nodesByLabel 00:00:07.126 Found a total of 2 nodes with the 'sorcerer' label 00:00:07.134 [Pipeline] httpRequest 00:00:07.138 HttpMethod: GET 00:00:07.138 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.139 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:07.152 Response Code: HTTP/1.1 200 OK 00:00:07.152 Success: Status code 200 is in the accepted range: 200,404 00:00:07.153 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:10.681 [Pipeline] sh 00:00:10.963 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:10.980 [Pipeline] httpRequest 00:00:10.986 HttpMethod: GET 00:00:10.986 URL: http://10.211.164.101/packages/spdk_c5e2a446defa06b8b8d4b09bf06ef38ceeaa3386.tar.gz 00:00:10.987 Sending request to url: http://10.211.164.101/packages/spdk_c5e2a446defa06b8b8d4b09bf06ef38ceeaa3386.tar.gz 00:00:11.009 Response Code: HTTP/1.1 200 OK 00:00:11.010 Success: Status code 200 is in the accepted range: 200,404 00:00:11.011 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest_2/spdk_c5e2a446defa06b8b8d4b09bf06ef38ceeaa3386.tar.gz 00:01:05.648 [Pipeline] sh 00:01:05.925 + tar --no-same-owner -xf spdk_c5e2a446defa06b8b8d4b09bf06ef38ceeaa3386.tar.gz 00:01:09.222 [Pipeline] sh 00:01:09.505 + git -C spdk log --oneline -n5 00:01:09.505 c5e2a446d autorun_post: Check if skipped tests were executed in per-patch 00:01:09.505 8b38652da test/fuzz: Rename llvm fuzzing tests 00:01:09.505 e55c9a812 vbdev_error: decrement error_num atomically 00:01:09.505 f16e9f4d2 lib/event: framework_get_reactors supports getting pid and tid 00:01:09.505 2d610abe8 lib/env_dpdk: add spdk_get_tid function 00:01:09.547 [Pipeline] writeFile 00:01:09.555 [Pipeline] sh 00:01:09.827 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:09.837 [Pipeline] sh 00:01:10.112 + cat autorun-spdk.conf 00:01:10.112 SPDK_TEST_UNITTEST=1 00:01:10.112 SPDK_RUN_VALGRIND=0 00:01:10.112 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.112 SPDK_TEST_NVME=1 00:01:10.112 SPDK_TEST_BLOCKDEV=1 00:01:10.112 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.118 RUN_NIGHTLY=0 00:01:10.120 [Pipeline] } 00:01:10.137 [Pipeline] // stage 00:01:10.152 [Pipeline] stage 00:01:10.154 [Pipeline] { (Run VM) 00:01:10.169 [Pipeline] sh 00:01:10.448 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:10.448 + echo 'Start stage prepare_nvme.sh' 00:01:10.448 Start stage prepare_nvme.sh 00:01:10.448 + [[ -n 8 ]] 00:01:10.448 + disk_prefix=ex8 00:01:10.448 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest_2 ]] 00:01:10.448 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf ]] 00:01:10.448 + source /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf 00:01:10.448 ++ SPDK_TEST_UNITTEST=1 00:01:10.448 ++ SPDK_RUN_VALGRIND=0 00:01:10.448 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.448 ++ SPDK_TEST_NVME=1 00:01:10.448 ++ SPDK_TEST_BLOCKDEV=1 00:01:10.448 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.448 ++ RUN_NIGHTLY=0 00:01:10.448 + cd /var/jenkins/workspace/freebsd-vg-autotest_2 00:01:10.448 + nvme_files=() 00:01:10.448 + declare -A nvme_files 00:01:10.448 + backend_dir=/var/lib/libvirt/images/backends 00:01:10.448 + nvme_files['nvme.img']=5G 00:01:10.448 + nvme_files['nvme-cmb.img']=5G 00:01:10.448 + 
nvme_files['nvme-multi0.img']=4G 00:01:10.448 + nvme_files['nvme-multi1.img']=4G 00:01:10.448 + nvme_files['nvme-multi2.img']=4G 00:01:10.448 + nvme_files['nvme-openstack.img']=8G 00:01:10.448 + nvme_files['nvme-zns.img']=5G 00:01:10.448 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:10.448 + (( SPDK_TEST_FTL == 1 )) 00:01:10.448 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:10.448 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:10.448 + for nvme in "${!nvme_files[@]}" 00:01:10.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:01:10.448 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.448 + for nvme in "${!nvme_files[@]}" 00:01:10.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:01:10.448 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.448 + for nvme in "${!nvme_files[@]}" 00:01:10.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:01:10.448 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:10.448 + for nvme in "${!nvme_files[@]}" 00:01:10.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:01:10.448 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.448 + for nvme in "${!nvme_files[@]}" 00:01:10.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:01:10.448 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.448 + for nvme in "${!nvme_files[@]}" 00:01:10.448 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:01:10.706 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.706 + for nvme in "${!nvme_files[@]}" 00:01:10.706 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:01:10.706 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:10.706 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:01:10.706 + echo 'End stage prepare_nvme.sh' 00:01:10.706 End stage prepare_nvme.sh 00:01:10.717 [Pipeline] sh 00:01:10.995 + DISTRO=freebsd13 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:10.996 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme.img -H -a -v -f freebsd13 00:01:10.996 00:01:10.996 DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant 00:01:10.996 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk 00:01:10.996 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest_2 00:01:10.996 HELP=0 00:01:10.996 DRY_RUN=0 00:01:10.996 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme.img, 00:01:10.996 NVME_DISKS_TYPE=nvme, 00:01:10.996 NVME_AUTO_CREATE=0 00:01:10.996 NVME_DISKS_NAMESPACES=, 00:01:10.996 NVME_CMB=, 00:01:10.996 NVME_PMR=, 00:01:10.996 NVME_ZNS=, 00:01:10.996 NVME_MS=, 
00:01:10.996 NVME_FDP=, 00:01:10.996 SPDK_VAGRANT_DISTRO=freebsd13 00:01:10.996 SPDK_VAGRANT_VMCPU=10 00:01:10.996 SPDK_VAGRANT_VMRAM=12288 00:01:10.996 SPDK_VAGRANT_PROVIDER=libvirt 00:01:10.996 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:10.996 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:10.996 SPDK_OPENSTACK_NETWORK=0 00:01:10.996 VAGRANT_PACKAGE_BOX=0 00:01:10.996 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:10.996 FORCE_DISTRO=true 00:01:10.996 VAGRANT_BOX_VERSION= 00:01:10.996 EXTRA_VAGRANTFILES= 00:01:10.996 NIC_MODEL=e1000 00:01:10.996 00:01:10.996 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest_2/freebsd13-libvirt' 00:01:10.996 /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd13-libvirt /var/jenkins/workspace/freebsd-vg-autotest_2 00:01:14.315 Bringing machine 'default' up with 'libvirt' provider... 00:01:14.641 ==> default: Creating image (snapshot of base box volume). 00:01:14.641 ==> default: Creating domain with the following settings... 00:01:14.641 ==> default: -- Name: freebsd13-13.2-RELEASE-1712646987-2220_default_1718013919_f9dd3c8bd8d0df00f065 00:01:14.641 ==> default: -- Domain type: kvm 00:01:14.641 ==> default: -- Cpus: 10 00:01:14.641 ==> default: -- Feature: acpi 00:01:14.641 ==> default: -- Feature: apic 00:01:14.641 ==> default: -- Feature: pae 00:01:14.641 ==> default: -- Memory: 12288M 00:01:14.641 ==> default: -- Memory Backing: hugepages: 00:01:14.641 ==> default: -- Management MAC: 00:01:14.641 ==> default: -- Loader: 00:01:14.641 ==> default: -- Nvram: 00:01:14.641 ==> default: -- Base box: spdk/freebsd13 00:01:14.641 ==> default: -- Storage pool: default 00:01:14.641 ==> default: -- Image: /var/lib/libvirt/images/freebsd13-13.2-RELEASE-1712646987-2220_default_1718013919_f9dd3c8bd8d0df00f065.img (32G) 00:01:14.641 ==> default: -- Volume Cache: default 00:01:14.641 ==> default: -- Kernel: 00:01:14.641 ==> default: -- Initrd: 00:01:14.641 ==> default: -- Graphics Type: vnc 00:01:14.641 ==> default: -- Graphics Port: -1 00:01:14.641 ==> default: -- Graphics IP: 127.0.0.1 00:01:14.641 ==> default: -- Graphics Password: Not defined 00:01:14.641 ==> default: -- Video Type: cirrus 00:01:14.641 ==> default: -- Video VRAM: 9216 00:01:14.641 ==> default: -- Sound Type: 00:01:14.641 ==> default: -- Keymap: en-us 00:01:14.641 ==> default: -- TPM Path: 00:01:14.641 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:14.641 ==> default: -- Command line args: 00:01:14.641 ==> default: -> value=-device, 00:01:14.641 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:14.641 ==> default: -> value=-drive, 00:01:14.641 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0, 00:01:14.641 ==> default: -> value=-device, 00:01:14.641 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:14.900 ==> default: Creating shared folders metadata... 00:01:14.900 ==> default: Starting domain. 00:01:16.277 ==> default: Waiting for domain to get an IP address... 00:01:38.197 ==> default: Waiting for SSH to become available... 00:01:53.076 ==> default: Configuring and enabling network interfaces... 00:01:54.976 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:07.226 ==> default: Mounting SSHFS shared folder... 
00:02:07.226 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd13-libvirt/output => /home/vagrant/spdk_repo/output 00:02:07.226 ==> default: Checking Mount.. 00:02:07.226 ==> default: Folder Successfully Mounted! 00:02:07.226 ==> default: Running provisioner: file... 00:02:07.226 default: ~/.gitconfig => .gitconfig 00:02:07.484 00:02:07.484 SUCCESS! 00:02:07.484 00:02:07.484 cd to /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd13-libvirt and type "vagrant ssh" to use. 00:02:07.484 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:07.484 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd13-libvirt" to destroy all trace of vm. 00:02:07.484 00:02:07.493 [Pipeline] } 00:02:07.513 [Pipeline] // stage 00:02:07.521 [Pipeline] dir 00:02:07.522 Running in /var/jenkins/workspace/freebsd-vg-autotest_2/freebsd13-libvirt 00:02:07.523 [Pipeline] { 00:02:07.536 [Pipeline] catchError 00:02:07.538 [Pipeline] { 00:02:07.550 [Pipeline] sh 00:02:07.826 + vagrant ssh-config --host vagrant 00:02:07.826 + sed -ne /^Host/,$p 00:02:07.826 + tee ssh_conf 00:02:12.064 Host vagrant 00:02:12.064 HostName 192.168.121.164 00:02:12.064 User vagrant 00:02:12.064 Port 22 00:02:12.064 UserKnownHostsFile /dev/null 00:02:12.064 StrictHostKeyChecking no 00:02:12.064 PasswordAuthentication no 00:02:12.064 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd13/13.2-RELEASE-1712646987-2220/libvirt/freebsd13 00:02:12.064 IdentitiesOnly yes 00:02:12.064 LogLevel FATAL 00:02:12.064 ForwardAgent yes 00:02:12.064 ForwardX11 yes 00:02:12.064 00:02:12.080 [Pipeline] withEnv 00:02:12.083 [Pipeline] { 00:02:12.100 [Pipeline] sh 00:02:12.383 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:12.383 source /etc/os-release 00:02:12.383 [[ -e /image.version ]] && img=$(< /image.version) 00:02:12.383 # Minimal, systemd-like check. 00:02:12.383 if [[ -e /.dockerenv ]]; then 00:02:12.383 # Clear garbage from the node's name: 00:02:12.383 # agt-er_autotest_547-896 -> autotest_547-896 00:02:12.383 # $HOSTNAME is the actual container id 00:02:12.383 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:12.383 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:12.383 # We can assume this is a mount from a host where container is running, 00:02:12.383 # so fetch its hostname to easily identify the target swarm worker. 
00:02:12.383 container="$(< /etc/hostname) ($agent)" 00:02:12.383 else 00:02:12.383 # Fallback 00:02:12.383 container=$agent 00:02:12.383 fi 00:02:12.383 fi 00:02:12.383 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:12.383 00:02:12.394 [Pipeline] } 00:02:12.413 [Pipeline] // withEnv 00:02:12.423 [Pipeline] setCustomBuildProperty 00:02:12.439 [Pipeline] stage 00:02:12.443 [Pipeline] { (Tests) 00:02:12.463 [Pipeline] sh 00:02:12.754 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:12.770 [Pipeline] sh 00:02:13.049 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:13.064 [Pipeline] timeout 00:02:13.065 Timeout set to expire in 1 hr 30 min 00:02:13.067 [Pipeline] { 00:02:13.085 [Pipeline] sh 00:02:13.365 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:13.932 HEAD is now at c5e2a446d autorun_post: Check if skipped tests were executed in per-patch 00:02:13.946 [Pipeline] sh 00:02:14.227 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:14.243 [Pipeline] sh 00:02:14.525 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:14.544 [Pipeline] sh 00:02:14.826 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo 00:02:14.826 ++ readlink -f spdk_repo 00:02:14.826 + DIR_ROOT=/usr/home/vagrant/spdk_repo 00:02:14.826 + [[ -n /usr/home/vagrant/spdk_repo ]] 00:02:14.826 + DIR_SPDK=/usr/home/vagrant/spdk_repo/spdk 00:02:14.826 + DIR_OUTPUT=/usr/home/vagrant/spdk_repo/output 00:02:14.826 + [[ -d /usr/home/vagrant/spdk_repo/spdk ]] 00:02:14.826 + [[ ! 
-d /usr/home/vagrant/spdk_repo/output ]] 00:02:14.826 + [[ -d /usr/home/vagrant/spdk_repo/output ]] 00:02:14.826 + [[ freebsd-vg-autotest == pkgdep-* ]] 00:02:14.826 + cd /usr/home/vagrant/spdk_repo 00:02:14.826 + source /etc/os-release 00:02:14.826 ++ NAME=FreeBSD 00:02:14.826 ++ VERSION=13.2-RELEASE 00:02:14.826 ++ VERSION_ID=13.2 00:02:14.826 ++ ID=freebsd 00:02:14.826 ++ ANSI_COLOR='0;31' 00:02:14.826 ++ PRETTY_NAME='FreeBSD 13.2-RELEASE' 00:02:14.826 ++ CPE_NAME=cpe:/o:freebsd:freebsd:13.2 00:02:14.826 ++ HOME_URL=https://FreeBSD.org/ 00:02:14.826 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:02:14.826 + uname -a 00:02:14.826 FreeBSD freebsd-cloud-1712646987-2220.local 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64 00:02:14.826 + sudo /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:15.085 Contigmem (not present) 00:02:15.085 Buffer Size: not set 00:02:15.085 Num Buffers: not set 00:02:15.085 00:02:15.085 00:02:15.085 Type BDF Vendor Device Driver 00:02:15.085 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:02:15.085 + rm -f /tmp/spdk-ld-path 00:02:15.085 + source autorun-spdk.conf 00:02:15.085 ++ SPDK_TEST_UNITTEST=1 00:02:15.085 ++ SPDK_RUN_VALGRIND=0 00:02:15.085 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.085 ++ SPDK_TEST_NVME=1 00:02:15.085 ++ SPDK_TEST_BLOCKDEV=1 00:02:15.085 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:15.085 ++ RUN_NIGHTLY=0 00:02:15.086 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:15.086 + [[ -n '' ]] 00:02:15.086 + sudo git config --global --add safe.directory /usr/home/vagrant/spdk_repo/spdk 00:02:15.086 + for M in /var/spdk/build-*-manifest.txt 00:02:15.086 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:15.086 + cp /var/spdk/build-pkg-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:02:15.086 + for M in /var/spdk/build-*-manifest.txt 00:02:15.086 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:15.086 + cp /var/spdk/build-repo-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:02:15.086 ++ uname 00:02:15.086 + [[ FreeBSD == \L\i\n\u\x ]] 00:02:15.086 + dmesg_pid=1272 00:02:15.086 + tail -F /var/log/messages 00:02:15.086 + [[ FreeBSD == FreeBSD ]] 00:02:15.086 + export LC_ALL=C LC_CTYPE=C 00:02:15.086 + LC_ALL=C 00:02:15.086 + LC_CTYPE=C 00:02:15.086 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.086 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.086 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:15.086 + [[ -x /usr/src/fio-static/fio ]] 00:02:15.086 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:15.086 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:15.086 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:15.086 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:15.086 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:15.086 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:15.086 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:15.086 + spdk/autorun.sh /usr/home/vagrant/spdk_repo/autorun-spdk.conf 00:02:15.086 Test configuration: 00:02:15.086 SPDK_TEST_UNITTEST=1 00:02:15.086 SPDK_RUN_VALGRIND=0 00:02:15.086 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.086 SPDK_TEST_NVME=1 00:02:15.086 SPDK_TEST_BLOCKDEV=1 00:02:15.086 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:15.344 RUN_NIGHTLY=0 10:06:20 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:15.344 10:06:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:15.344 10:06:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.344 10:06:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.344 10:06:20 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:15.344 10:06:20 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:15.344 10:06:20 -- paths/export.sh@4 -- $ export PATH 00:02:15.344 10:06:20 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:15.344 10:06:20 -- common/autobuild_common.sh@436 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:02:15.344 10:06:20 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:15.344 10:06:20 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718013980.XXXXXX 00:02:15.344 10:06:20 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718013980.XXXXXX.m6FEALUR 00:02:15.344 10:06:20 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:15.344 10:06:20 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:02:15.344 10:06:20 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:02:15.344 10:06:20 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:15.344 10:06:20 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:15.344 10:06:20 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:15.344 10:06:20 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:15.344 10:06:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.344 10:06:20 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:02:15.344 10:06:20 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:15.344 10:06:20 -- pm/common@17 -- $ local monitor 00:02:15.344 10:06:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.344 10:06:20 -- pm/common@25 -- $ sleep 1 
00:02:15.344 10:06:20 -- pm/common@21 -- $ date +%s 00:02:15.344 10:06:20 -- pm/common@21 -- $ /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718013980 00:02:15.344 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718013980_collect-vmstat.pm.log 00:02:16.717 10:06:21 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:16.717 10:06:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:16.717 10:06:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:16.717 10:06:21 -- spdk/autobuild.sh@13 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:02:16.717 10:06:21 -- spdk/autobuild.sh@16 -- $ date -u 00:02:16.717 Mon Jun 10 10:06:21 UTC 2024 00:02:16.717 10:06:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:16.717 v24.09-pre-55-gc5e2a446d 00:02:16.717 10:06:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:16.717 10:06:21 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:02:16.717 10:06:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:16.717 10:06:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:16.717 10:06:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:16.717 10:06:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:16.717 10:06:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:16.717 10:06:21 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:16.717 10:06:21 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:16.717 10:06:21 -- common/autobuild_common.sh@413 -- $ run_test unittest_build _unittest_build 00:02:16.717 10:06:21 -- common/autotest_common.sh@1100 -- $ '[' 2 -le 1 ']' 00:02:16.717 10:06:21 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:02:16.717 10:06:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.717 ************************************ 00:02:16.717 START TEST unittest_build 00:02:16.717 ************************************ 00:02:16.717 10:06:21 unittest_build -- common/autotest_common.sh@1124 -- $ _unittest_build 00:02:16.717 10:06:21 unittest_build -- common/autobuild_common.sh@404 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:02:17.649 Notice: Vhost, rte_vhost library, virtio, and fuse 00:02:17.650 are only supported on Linux. Turning off default feature. 00:02:17.650 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:17.650 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:02:18.216 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:02:18.216 Using 'verbs' RDMA provider 00:02:28.756 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:38.725 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:38.725 Creating mk/config.mk...done. 00:02:38.725 Creating mk/cc.flags.mk...done. 00:02:38.725 Type 'gmake' to build. 00:02:38.725 10:06:44 unittest_build -- common/autobuild_common.sh@405 -- $ gmake -j10 00:02:38.984 gmake[1]: Nothing to be done for 'all'. 
00:02:42.267 ps: stdin: not a terminal 00:02:46.453 The Meson build system 00:02:46.453 Version: 1.3.1 00:02:46.453 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:02:46.453 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:46.453 Build type: native build 00:02:46.453 Program cat found: YES (/bin/cat) 00:02:46.453 Project name: DPDK 00:02:46.453 Project version: 24.03.0 00:02:46.453 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:02:46.453 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:02:46.453 Host machine cpu family: x86_64 00:02:46.453 Host machine cpu: x86_64 00:02:46.453 Message: ## Building in Developer Mode ## 00:02:46.453 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:02:46.453 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:46.453 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:46.453 Program python3 found: YES (/usr/local/bin/python3.9) 00:02:46.453 Program cat found: YES (/bin/cat) 00:02:46.453 Compiler for C supports arguments -march=native: YES 00:02:46.453 Checking for size of "void *" : 8 00:02:46.453 Checking for size of "void *" : 8 (cached) 00:02:46.453 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:46.453 Library m found: YES 00:02:46.453 Library numa found: NO 00:02:46.453 Library fdt found: NO 00:02:46.453 Library execinfo found: YES 00:02:46.453 Has header "execinfo.h" : YES 00:02:46.453 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:02:46.453 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:46.453 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:46.453 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:46.453 Run-time dependency openssl found: YES 3.0.13 00:02:46.453 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:46.453 Library pcap found: YES 00:02:46.453 Has header "pcap.h" with dependency -lpcap: YES 00:02:46.453 Compiler for C supports arguments -Wcast-qual: YES 00:02:46.453 Compiler for C supports arguments -Wdeprecated: YES 00:02:46.453 Compiler for C supports arguments -Wformat: YES 00:02:46.453 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:46.453 Compiler for C supports arguments -Wformat-security: YES 00:02:46.453 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:46.453 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:46.453 Compiler for C supports arguments -Wnested-externs: YES 00:02:46.453 Compiler for C supports arguments -Wold-style-definition: YES 00:02:46.453 Compiler for C supports arguments -Wpointer-arith: YES 00:02:46.453 Compiler for C supports arguments -Wsign-compare: YES 00:02:46.453 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:46.453 Compiler for C supports arguments -Wundef: YES 00:02:46.453 Compiler for C supports arguments -Wwrite-strings: YES 00:02:46.453 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:46.453 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:46.453 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:46.453 Compiler for C supports arguments -mavx512f: YES 00:02:46.453 Checking if "AVX512 checking" compiles: YES 00:02:46.453 Fetching value of define "__SSE4_2__" : 1 00:02:46.453 
Fetching value of define "__AES__" : 1 00:02:46.453 Fetching value of define "__AVX__" : 1 00:02:46.453 Fetching value of define "__AVX2__" : 1 00:02:46.453 Fetching value of define "__AVX512BW__" : 1 00:02:46.453 Fetching value of define "__AVX512CD__" : 1 00:02:46.453 Fetching value of define "__AVX512DQ__" : 1 00:02:46.453 Fetching value of define "__AVX512F__" : 1 00:02:46.453 Fetching value of define "__AVX512VL__" : 1 00:02:46.453 Fetching value of define "__PCLMUL__" : 1 00:02:46.453 Fetching value of define "__RDRND__" : 1 00:02:46.453 Fetching value of define "__RDSEED__" : 1 00:02:46.454 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:46.454 Fetching value of define "__znver1__" : (undefined) 00:02:46.454 Fetching value of define "__znver2__" : (undefined) 00:02:46.454 Fetching value of define "__znver3__" : (undefined) 00:02:46.454 Fetching value of define "__znver4__" : (undefined) 00:02:46.454 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:46.454 Message: lib/log: Defining dependency "log" 00:02:46.454 Message: lib/kvargs: Defining dependency "kvargs" 00:02:46.454 Message: lib/telemetry: Defining dependency "telemetry" 00:02:46.454 Checking if "Detect argument count for CPU_OR" compiles: YES 00:02:46.454 Checking for function "getentropy" : YES 00:02:46.454 Message: lib/eal: Defining dependency "eal" 00:02:46.454 Message: lib/ring: Defining dependency "ring" 00:02:46.454 Message: lib/rcu: Defining dependency "rcu" 00:02:46.454 Message: lib/mempool: Defining dependency "mempool" 00:02:46.454 Message: lib/mbuf: Defining dependency "mbuf" 00:02:46.454 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:46.454 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:46.454 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:46.454 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:46.454 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:46.454 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:46.454 Compiler for C supports arguments -mpclmul: YES 00:02:46.454 Compiler for C supports arguments -maes: YES 00:02:46.454 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:46.454 Compiler for C supports arguments -mavx512bw: YES 00:02:46.454 Compiler for C supports arguments -mavx512dq: YES 00:02:46.454 Compiler for C supports arguments -mavx512vl: YES 00:02:46.454 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:46.454 Compiler for C supports arguments -mavx2: YES 00:02:46.454 Compiler for C supports arguments -mavx: YES 00:02:46.454 Message: lib/net: Defining dependency "net" 00:02:46.454 Message: lib/meter: Defining dependency "meter" 00:02:46.454 Message: lib/ethdev: Defining dependency "ethdev" 00:02:46.454 Message: lib/pci: Defining dependency "pci" 00:02:46.454 Message: lib/cmdline: Defining dependency "cmdline" 00:02:46.454 Message: lib/hash: Defining dependency "hash" 00:02:46.454 Message: lib/timer: Defining dependency "timer" 00:02:46.454 Message: lib/compressdev: Defining dependency "compressdev" 00:02:46.454 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:46.454 Message: lib/dmadev: Defining dependency "dmadev" 00:02:46.454 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:46.454 Message: lib/reorder: Defining dependency "reorder" 00:02:46.454 Message: lib/security: Defining dependency "security" 00:02:46.454 Has header "linux/userfaultfd.h" : NO 00:02:46.454 Has header "linux/vduse.h" : NO 00:02:46.454 Compiler for C supports arguments 
-Wno-format-truncation: NO (cached) 00:02:46.454 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:46.454 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:46.454 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:46.454 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:46.454 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:46.454 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:46.454 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:02:46.454 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:46.454 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:46.454 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:46.454 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:46.454 Configuring doxy-api-html.conf using configuration 00:02:46.454 Configuring doxy-api-man.conf using configuration 00:02:46.454 Program mandb found: NO 00:02:46.454 Program sphinx-build found: NO 00:02:46.454 Configuring rte_build_config.h using configuration 00:02:46.454 Message: 00:02:46.454 ================= 00:02:46.454 Applications Enabled 00:02:46.454 ================= 00:02:46.454 00:02:46.454 apps: 00:02:46.454 00:02:46.454 00:02:46.454 Message: 00:02:46.454 ================= 00:02:46.454 Libraries Enabled 00:02:46.454 ================= 00:02:46.454 00:02:46.454 libs: 00:02:46.454 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:46.454 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:46.454 cryptodev, dmadev, reorder, security, 00:02:46.454 00:02:46.454 Message: 00:02:46.454 =============== 00:02:46.454 Drivers Enabled 00:02:46.454 =============== 00:02:46.454 00:02:46.454 common: 00:02:46.454 00:02:46.454 bus: 00:02:46.454 pci, vdev, 00:02:46.454 mempool: 00:02:46.454 ring, 00:02:46.454 dma: 00:02:46.454 00:02:46.454 net: 00:02:46.454 00:02:46.454 crypto: 00:02:46.454 00:02:46.454 compress: 00:02:46.454 00:02:46.454 00:02:46.454 Message: 00:02:46.454 ================= 00:02:46.454 Content Skipped 00:02:46.454 ================= 00:02:46.454 00:02:46.454 apps: 00:02:46.454 dumpcap: explicitly disabled via build config 00:02:46.454 graph: explicitly disabled via build config 00:02:46.454 pdump: explicitly disabled via build config 00:02:46.454 proc-info: explicitly disabled via build config 00:02:46.454 test-acl: explicitly disabled via build config 00:02:46.454 test-bbdev: explicitly disabled via build config 00:02:46.454 test-cmdline: explicitly disabled via build config 00:02:46.454 test-compress-perf: explicitly disabled via build config 00:02:46.454 test-crypto-perf: explicitly disabled via build config 00:02:46.454 test-dma-perf: explicitly disabled via build config 00:02:46.454 test-eventdev: explicitly disabled via build config 00:02:46.454 test-fib: explicitly disabled via build config 00:02:46.454 test-flow-perf: explicitly disabled via build config 00:02:46.454 test-gpudev: explicitly disabled via build config 00:02:46.454 test-mldev: explicitly disabled via build config 00:02:46.454 test-pipeline: explicitly disabled via build config 00:02:46.454 test-pmd: explicitly disabled via build config 00:02:46.454 test-regex: explicitly disabled via build config 00:02:46.454 test-sad: explicitly disabled via build config 00:02:46.454 test-security-perf: explicitly disabled via build config 00:02:46.454 00:02:46.454 libs: 00:02:46.454 
argparse: explicitly disabled via build config 00:02:46.454 metrics: explicitly disabled via build config 00:02:46.454 acl: explicitly disabled via build config 00:02:46.454 bbdev: explicitly disabled via build config 00:02:46.454 bitratestats: explicitly disabled via build config 00:02:46.454 bpf: explicitly disabled via build config 00:02:46.454 cfgfile: explicitly disabled via build config 00:02:46.454 distributor: explicitly disabled via build config 00:02:46.454 efd: explicitly disabled via build config 00:02:46.454 eventdev: explicitly disabled via build config 00:02:46.454 dispatcher: explicitly disabled via build config 00:02:46.454 gpudev: explicitly disabled via build config 00:02:46.454 gro: explicitly disabled via build config 00:02:46.454 gso: explicitly disabled via build config 00:02:46.454 ip_frag: explicitly disabled via build config 00:02:46.454 jobstats: explicitly disabled via build config 00:02:46.454 latencystats: explicitly disabled via build config 00:02:46.454 lpm: explicitly disabled via build config 00:02:46.454 member: explicitly disabled via build config 00:02:46.454 pcapng: explicitly disabled via build config 00:02:46.454 power: only supported on Linux 00:02:46.454 rawdev: explicitly disabled via build config 00:02:46.454 regexdev: explicitly disabled via build config 00:02:46.454 mldev: explicitly disabled via build config 00:02:46.454 rib: explicitly disabled via build config 00:02:46.454 sched: explicitly disabled via build config 00:02:46.454 stack: explicitly disabled via build config 00:02:46.454 vhost: only supported on Linux 00:02:46.454 ipsec: explicitly disabled via build config 00:02:46.454 pdcp: explicitly disabled via build config 00:02:46.454 fib: explicitly disabled via build config 00:02:46.454 port: explicitly disabled via build config 00:02:46.454 pdump: explicitly disabled via build config 00:02:46.454 table: explicitly disabled via build config 00:02:46.454 pipeline: explicitly disabled via build config 00:02:46.454 graph: explicitly disabled via build config 00:02:46.454 node: explicitly disabled via build config 00:02:46.454 00:02:46.454 drivers: 00:02:46.454 common/cpt: not in enabled drivers build config 00:02:46.454 common/dpaax: not in enabled drivers build config 00:02:46.454 common/iavf: not in enabled drivers build config 00:02:46.454 common/idpf: not in enabled drivers build config 00:02:46.454 common/ionic: not in enabled drivers build config 00:02:46.454 common/mvep: not in enabled drivers build config 00:02:46.454 common/octeontx: not in enabled drivers build config 00:02:46.454 bus/auxiliary: not in enabled drivers build config 00:02:46.454 bus/cdx: not in enabled drivers build config 00:02:46.454 bus/dpaa: not in enabled drivers build config 00:02:46.454 bus/fslmc: not in enabled drivers build config 00:02:46.454 bus/ifpga: not in enabled drivers build config 00:02:46.454 bus/platform: not in enabled drivers build config 00:02:46.454 bus/uacce: not in enabled drivers build config 00:02:46.454 bus/vmbus: not in enabled drivers build config 00:02:46.454 common/cnxk: not in enabled drivers build config 00:02:46.454 common/mlx5: not in enabled drivers build config 00:02:46.454 common/nfp: not in enabled drivers build config 00:02:46.454 common/nitrox: not in enabled drivers build config 00:02:46.454 common/qat: not in enabled drivers build config 00:02:46.454 common/sfc_efx: not in enabled drivers build config 00:02:46.454 mempool/bucket: not in enabled drivers build config 00:02:46.454 mempool/cnxk: not in enabled drivers build 
config 00:02:46.454 mempool/dpaa: not in enabled drivers build config 00:02:46.454 mempool/dpaa2: not in enabled drivers build config 00:02:46.454 mempool/octeontx: not in enabled drivers build config 00:02:46.454 mempool/stack: not in enabled drivers build config 00:02:46.454 dma/cnxk: not in enabled drivers build config 00:02:46.454 dma/dpaa: not in enabled drivers build config 00:02:46.455 dma/dpaa2: not in enabled drivers build config 00:02:46.455 dma/hisilicon: not in enabled drivers build config 00:02:46.455 dma/idxd: not in enabled drivers build config 00:02:46.455 dma/ioat: not in enabled drivers build config 00:02:46.455 dma/skeleton: not in enabled drivers build config 00:02:46.455 net/af_packet: not in enabled drivers build config 00:02:46.455 net/af_xdp: not in enabled drivers build config 00:02:46.455 net/ark: not in enabled drivers build config 00:02:46.455 net/atlantic: not in enabled drivers build config 00:02:46.455 net/avp: not in enabled drivers build config 00:02:46.455 net/axgbe: not in enabled drivers build config 00:02:46.455 net/bnx2x: not in enabled drivers build config 00:02:46.455 net/bnxt: not in enabled drivers build config 00:02:46.455 net/bonding: not in enabled drivers build config 00:02:46.455 net/cnxk: not in enabled drivers build config 00:02:46.455 net/cpfl: not in enabled drivers build config 00:02:46.455 net/cxgbe: not in enabled drivers build config 00:02:46.455 net/dpaa: not in enabled drivers build config 00:02:46.455 net/dpaa2: not in enabled drivers build config 00:02:46.455 net/e1000: not in enabled drivers build config 00:02:46.455 net/ena: not in enabled drivers build config 00:02:46.455 net/enetc: not in enabled drivers build config 00:02:46.455 net/enetfec: not in enabled drivers build config 00:02:46.455 net/enic: not in enabled drivers build config 00:02:46.455 net/failsafe: not in enabled drivers build config 00:02:46.455 net/fm10k: not in enabled drivers build config 00:02:46.455 net/gve: not in enabled drivers build config 00:02:46.455 net/hinic: not in enabled drivers build config 00:02:46.455 net/hns3: not in enabled drivers build config 00:02:46.455 net/i40e: not in enabled drivers build config 00:02:46.455 net/iavf: not in enabled drivers build config 00:02:46.455 net/ice: not in enabled drivers build config 00:02:46.455 net/idpf: not in enabled drivers build config 00:02:46.455 net/igc: not in enabled drivers build config 00:02:46.455 net/ionic: not in enabled drivers build config 00:02:46.455 net/ipn3ke: not in enabled drivers build config 00:02:46.455 net/ixgbe: not in enabled drivers build config 00:02:46.455 net/mana: not in enabled drivers build config 00:02:46.455 net/memif: not in enabled drivers build config 00:02:46.455 net/mlx4: not in enabled drivers build config 00:02:46.455 net/mlx5: not in enabled drivers build config 00:02:46.455 net/mvneta: not in enabled drivers build config 00:02:46.455 net/mvpp2: not in enabled drivers build config 00:02:46.455 net/netvsc: not in enabled drivers build config 00:02:46.455 net/nfb: not in enabled drivers build config 00:02:46.455 net/nfp: not in enabled drivers build config 00:02:46.455 net/ngbe: not in enabled drivers build config 00:02:46.455 net/null: not in enabled drivers build config 00:02:46.455 net/octeontx: not in enabled drivers build config 00:02:46.455 net/octeon_ep: not in enabled drivers build config 00:02:46.455 net/pcap: not in enabled drivers build config 00:02:46.455 net/pfe: not in enabled drivers build config 00:02:46.455 net/qede: not in enabled drivers build 
config 00:02:46.455 net/ring: not in enabled drivers build config 00:02:46.455 net/sfc: not in enabled drivers build config 00:02:46.455 net/softnic: not in enabled drivers build config 00:02:46.455 net/tap: not in enabled drivers build config 00:02:46.455 net/thunderx: not in enabled drivers build config 00:02:46.455 net/txgbe: not in enabled drivers build config 00:02:46.455 net/vdev_netvsc: not in enabled drivers build config 00:02:46.455 net/vhost: not in enabled drivers build config 00:02:46.455 net/virtio: not in enabled drivers build config 00:02:46.455 net/vmxnet3: not in enabled drivers build config 00:02:46.455 raw/*: missing internal dependency, "rawdev" 00:02:46.455 crypto/armv8: not in enabled drivers build config 00:02:46.455 crypto/bcmfs: not in enabled drivers build config 00:02:46.455 crypto/caam_jr: not in enabled drivers build config 00:02:46.455 crypto/ccp: not in enabled drivers build config 00:02:46.455 crypto/cnxk: not in enabled drivers build config 00:02:46.455 crypto/dpaa_sec: not in enabled drivers build config 00:02:46.455 crypto/dpaa2_sec: not in enabled drivers build config 00:02:46.455 crypto/ipsec_mb: not in enabled drivers build config 00:02:46.455 crypto/mlx5: not in enabled drivers build config 00:02:46.455 crypto/mvsam: not in enabled drivers build config 00:02:46.455 crypto/nitrox: not in enabled drivers build config 00:02:46.455 crypto/null: not in enabled drivers build config 00:02:46.455 crypto/octeontx: not in enabled drivers build config 00:02:46.455 crypto/openssl: not in enabled drivers build config 00:02:46.455 crypto/scheduler: not in enabled drivers build config 00:02:46.455 crypto/uadk: not in enabled drivers build config 00:02:46.455 crypto/virtio: not in enabled drivers build config 00:02:46.455 compress/isal: not in enabled drivers build config 00:02:46.455 compress/mlx5: not in enabled drivers build config 00:02:46.455 compress/nitrox: not in enabled drivers build config 00:02:46.455 compress/octeontx: not in enabled drivers build config 00:02:46.455 compress/zlib: not in enabled drivers build config 00:02:46.455 regex/*: missing internal dependency, "regexdev" 00:02:46.455 ml/*: missing internal dependency, "mldev" 00:02:46.455 vdpa/*: missing internal dependency, "vhost" 00:02:46.455 event/*: missing internal dependency, "eventdev" 00:02:46.455 baseband/*: missing internal dependency, "bbdev" 00:02:46.455 gpu/*: missing internal dependency, "gpudev" 00:02:46.455 00:02:46.455 00:02:46.712 Build targets in project: 81 00:02:46.712 00:02:46.712 DPDK 24.03.0 00:02:46.712 00:02:46.712 User defined options 00:02:46.712 buildtype : debug 00:02:46.712 default_library : static 00:02:46.712 libdir : lib 00:02:46.712 prefix : / 00:02:46.712 c_args : -fPIC -Werror 00:02:46.712 c_link_args : 00:02:46.712 cpu_instruction_set: native 00:02:46.712 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:46.712 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:46.712 enable_docs : false 00:02:46.712 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:46.712 enable_kmods : true 00:02:46.712 tests : false 00:02:46.712 00:02:46.712 Found 
ninja-1.11.1 at /usr/local/bin/ninja 00:02:46.969 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:46.969 [1/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:46.969 [2/233] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:46.969 [3/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:46.969 [4/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:02:46.969 [5/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:46.969 [6/233] Linking static target lib/librte_log.a 00:02:46.969 [7/233] Linking static target lib/librte_kvargs.a 00:02:47.227 [8/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:47.227 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:47.227 [10/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:47.227 [11/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:47.227 [12/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:47.227 [13/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:47.227 [14/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:47.227 [15/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:47.227 [16/233] Linking static target lib/librte_telemetry.a 00:02:47.484 [17/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:47.484 [18/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:47.484 [19/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:47.484 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:47.484 [21/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:47.484 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:47.484 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:47.484 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:47.484 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:47.484 [26/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:47.484 [27/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:47.742 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:47.742 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.742 [30/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.742 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.742 [32/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.742 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.742 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.742 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:47.742 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.742 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:48.000 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:48.000 [39/233] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:48.000 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:48.000 [41/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:48.000 [42/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.000 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:48.000 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:48.000 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:48.258 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:48.258 [47/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:48.258 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:48.259 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:48.259 [50/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.259 [51/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:02:48.259 [52/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.259 [53/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.259 [54/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:48.259 [55/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.259 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.259 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.517 [58/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.517 [59/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:02:48.517 [60/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:02:48.517 [61/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:02:48.517 [62/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.517 [63/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.517 [64/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.517 [65/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:02:48.517 [66/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:02:48.517 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:02:48.517 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:02:48.775 [69/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:02:48.775 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:02:48.775 [71/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:02:48.775 [72/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.775 [73/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:48.775 [74/233] Linking static target lib/librte_eal.a 00:02:48.775 [75/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:48.775 [76/233] Linking static target lib/librte_ring.a 00:02:49.033 [77/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.033 [78/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.033 [79/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.033 [80/233] 
Linking static target lib/librte_rcu.a 00:02:49.033 [81/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.033 [82/233] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.033 [83/233] Linking static target lib/librte_mempool.a 00:02:49.033 [84/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.033 [85/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.033 [86/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.033 [87/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.033 [88/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.033 [89/233] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.291 [90/233] Linking target lib/librte_log.so.24.1 00:02:49.291 [91/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.291 [92/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.291 [93/233] Linking static target lib/librte_mbuf.a 00:02:49.291 [94/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:49.291 [95/233] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:49.291 [96/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.291 [97/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.291 [98/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:49.291 [99/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.291 [100/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.291 [101/233] Linking target lib/librte_kvargs.so.24.1 00:02:49.291 [102/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.291 [103/233] Linking target lib/librte_telemetry.so.24.1 00:02:49.291 [104/233] Linking static target lib/librte_meter.a 00:02:49.548 [105/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.548 [106/233] Linking static target lib/librte_net.a 00:02:49.548 [107/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:49.548 [108/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:49.548 [109/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.548 [110/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:49.805 [111/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:49.805 [112/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:49.805 [113/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.805 [114/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:49.805 [115/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.063 [116/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.063 [117/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:50.063 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.063 [119/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:50.063 [120/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 
00:02:50.063 [121/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.063 [122/233] Linking static target lib/librte_pci.a 00:02:50.063 [123/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.063 [124/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.063 [125/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:50.321 [126/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:50.322 [127/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:50.322 [128/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.322 [129/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.322 [130/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.322 [131/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.322 [132/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.322 [133/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:50.322 [134/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.322 [135/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.322 [136/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.322 [137/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:50.322 [138/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:50.322 [139/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.322 [140/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:50.322 [141/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:50.579 [142/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:50.579 [143/233] Linking static target lib/librte_cmdline.a 00:02:50.579 [144/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.579 [145/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:50.579 [146/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:50.579 [147/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.579 [148/233] Linking static target lib/librte_ethdev.a 00:02:50.579 [149/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:50.579 [150/233] Linking static target lib/librte_timer.a 00:02:50.838 [151/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:50.838 [152/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:50.838 [153/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:50.838 [154/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.838 [155/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:50.838 [156/233] Linking static target lib/librte_hash.a 00:02:50.838 [157/233] Linking static target lib/librte_compressdev.a 00:02:51.096 [158/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:51.096 [159/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:51.096 [160/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:51.096 [161/233] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.096 [162/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.096 [163/233] Linking static target lib/librte_dmadev.a 00:02:51.096 [164/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.096 [165/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:51.096 [166/233] Linking static target lib/librte_reorder.a 00:02:51.355 [167/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.355 [168/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:51.355 [169/233] Linking static target lib/librte_cryptodev.a 00:02:51.355 [170/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.355 [171/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.355 [172/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:51.355 [173/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.355 [174/233] Linking static target lib/librte_security.a 00:02:51.355 [175/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.355 [176/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.355 [177/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:02:51.355 [178/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:51.613 [179/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.613 [180/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.613 [181/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.613 [182/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:51.613 [183/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.613 [184/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.613 [185/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.613 [186/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.871 [187/233] Linking static target drivers/librte_bus_pci.a 00:02:51.871 [188/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:51.871 [189/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.871 [190/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.871 [191/233] Linking static target drivers/librte_bus_vdev.a 00:02:51.871 [192/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:51.871 [193/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:52.129 [194/233] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.129 [195/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.129 [196/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.129 [197/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:52.129 [198/233] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.129 [199/233] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.129 [200/233] Linking static target drivers/librte_mempool_ring.a 00:02:53.505 [201/233] Generating kernel/freebsd/contigmem with a custom command 00:02:53.505 machine -> /usr/src/sys/amd64/include 00:02:53.505 x86 -> /usr/src/sys/x86/include 00:02:53.505 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:02:53.505 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:02:53.505 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:02:53.505 touch opt_global.h 00:02:53.505 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:02:53.505 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:02:53.505 :> export_syms 00:02:53.505 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:02:53.505 objcopy --strip-debug contigmem.ko 00:02:53.763 [202/233] Generating kernel/freebsd/nic_uio with a custom command 00:02:53.763 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:02:53.763 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:02:53.763 :> export_syms 00:02:53.763 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:02:53.763 objcopy --strip-debug nic_uio.ko 00:02:56.293 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.888 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.888 [205/233] Linking target lib/librte_eal.so.24.1 00:02:59.146 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:59.146 [207/233] Linking target lib/librte_ring.so.24.1 00:02:59.146 [208/233] Linking target lib/librte_pci.so.24.1 00:02:59.146 [209/233] Linking target lib/librte_timer.so.24.1 00:02:59.146 [210/233] Linking target drivers/librte_bus_vdev.so.24.1 00:02:59.146 [211/233] Linking target lib/librte_meter.so.24.1 00:02:59.146 [212/233] Linking target lib/librte_dmadev.so.24.1 00:02:59.146 [213/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:59.146 [214/233] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:59.146 [215/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:59.146 [216/233] Linking target lib/librte_mempool.so.24.1 00:02:59.146 [217/233] Linking target lib/librte_rcu.so.24.1 00:02:59.146 [218/233] Linking target drivers/librte_bus_pci.so.24.1 00:02:59.405 [219/233] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:59.405 [220/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:59.405 [221/233] Linking target drivers/librte_mempool_ring.so.24.1 00:02:59.405 [222/233] Linking target lib/librte_mbuf.so.24.1 00:02:59.405 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:59.747 [224/233] Linking target lib/librte_compressdev.so.24.1 00:02:59.747 [225/233] Linking target lib/librte_reorder.so.24.1 00:02:59.747 [226/233] Linking target lib/librte_net.so.24.1 00:02:59.747 [227/233] Linking target lib/librte_cryptodev.so.24.1 00:02:59.747 [228/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:59.747 [229/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:59.747 [230/233] Linking target 
lib/librte_cmdline.so.24.1 00:02:59.747 [231/233] Linking target lib/librte_security.so.24.1 00:02:59.747 [232/233] Linking target lib/librte_hash.so.24.1 00:02:59.747 [233/233] Linking target lib/librte_ethdev.so.24.1 00:02:59.747 INFO: autodetecting backend as ninja 00:02:59.747 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:00.681 CC lib/ut/ut.o 00:03:00.681 CC lib/log/log.o 00:03:00.681 CC lib/log/log_flags.o 00:03:00.681 CC lib/ut_mock/mock.o 00:03:00.681 CC lib/log/log_deprecated.o 00:03:00.681 LIB libspdk_ut_mock.a 00:03:00.681 LIB libspdk_log.a 00:03:00.681 LIB libspdk_ut.a 00:03:00.681 CC lib/dma/dma.o 00:03:00.681 CC lib/ioat/ioat.o 00:03:00.681 CXX lib/trace_parser/trace.o 00:03:00.681 CC lib/util/base64.o 00:03:00.681 CC lib/util/bit_array.o 00:03:00.681 CC lib/util/cpuset.o 00:03:00.681 CC lib/util/crc16.o 00:03:00.681 CC lib/util/crc32.o 00:03:00.681 CC lib/util/crc32_ieee.o 00:03:00.681 CC lib/util/crc32c.o 00:03:00.682 CC lib/util/crc64.o 00:03:00.682 LIB libspdk_ioat.a 00:03:00.682 CC lib/util/dif.o 00:03:00.682 CC lib/util/fd.o 00:03:00.682 CC lib/util/file.o 00:03:00.682 CC lib/util/hexlify.o 00:03:00.682 CC lib/util/iov.o 00:03:00.682 CC lib/util/math.o 00:03:00.682 LIB libspdk_dma.a 00:03:00.682 CC lib/util/pipe.o 00:03:00.682 CC lib/util/strerror_tls.o 00:03:00.682 CC lib/util/string.o 00:03:00.939 CC lib/util/uuid.o 00:03:00.940 CC lib/util/fd_group.o 00:03:00.940 CC lib/util/xor.o 00:03:00.940 CC lib/util/zipf.o 00:03:00.940 LIB libspdk_util.a 00:03:00.940 CC lib/env_dpdk/env.o 00:03:00.940 CC lib/env_dpdk/memory.o 00:03:00.940 CC lib/env_dpdk/pci.o 00:03:00.940 CC lib/env_dpdk/init.o 00:03:00.940 CC lib/rdma/common.o 00:03:00.940 CC lib/vmd/vmd.o 00:03:00.940 CC lib/idxd/idxd.o 00:03:00.940 CC lib/conf/conf.o 00:03:00.940 CC lib/json/json_parse.o 00:03:01.197 CC lib/json/json_util.o 00:03:01.197 LIB libspdk_conf.a 00:03:01.197 CC lib/rdma/rdma_verbs.o 00:03:01.197 CC lib/json/json_write.o 00:03:01.197 CC lib/vmd/led.o 00:03:01.197 CC lib/env_dpdk/threads.o 00:03:01.197 CC lib/idxd/idxd_user.o 00:03:01.197 CC lib/env_dpdk/pci_ioat.o 00:03:01.197 LIB libspdk_vmd.a 00:03:01.197 LIB libspdk_rdma.a 00:03:01.197 LIB libspdk_json.a 00:03:01.197 CC lib/env_dpdk/pci_virtio.o 00:03:01.197 CC lib/env_dpdk/pci_vmd.o 00:03:01.197 CC lib/env_dpdk/pci_idxd.o 00:03:01.197 CC lib/env_dpdk/pci_event.o 00:03:01.197 CC lib/env_dpdk/sigbus_handler.o 00:03:01.453 CC lib/env_dpdk/pci_dpdk.o 00:03:01.453 LIB libspdk_idxd.a 00:03:01.453 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:01.453 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:01.453 CC lib/jsonrpc/jsonrpc_server.o 00:03:01.453 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:01.453 CC lib/jsonrpc/jsonrpc_client.o 00:03:01.453 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:01.453 LIB libspdk_jsonrpc.a 00:03:01.453 LIB libspdk_env_dpdk.a 00:03:01.453 CC lib/rpc/rpc.o 00:03:01.453 LIB libspdk_trace_parser.a 00:03:01.711 LIB libspdk_rpc.a 00:03:01.711 CC lib/trace/trace.o 00:03:01.711 CC lib/trace/trace_rpc.o 00:03:01.711 CC lib/trace/trace_flags.o 00:03:01.711 CC lib/notify/notify.o 00:03:01.711 CC lib/notify/notify_rpc.o 00:03:01.711 CC lib/keyring/keyring.o 00:03:01.711 CC lib/keyring/keyring_rpc.o 00:03:01.711 LIB libspdk_notify.a 00:03:01.711 LIB libspdk_trace.a 00:03:01.711 LIB libspdk_keyring.a 00:03:01.969 CC lib/sock/sock.o 00:03:01.969 CC lib/sock/sock_rpc.o 00:03:01.969 CC lib/thread/iobuf.o 00:03:01.969 CC lib/thread/thread.o 00:03:01.969 LIB libspdk_sock.a 00:03:02.227 LIB 
libspdk_thread.a 00:03:02.227 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:02.227 CC lib/nvme/nvme_ns_cmd.o 00:03:02.227 CC lib/nvme/nvme_fabric.o 00:03:02.227 CC lib/nvme/nvme_ns.o 00:03:02.227 CC lib/nvme/nvme_ctrlr.o 00:03:02.227 CC lib/nvme/nvme_pcie_common.o 00:03:02.227 CC lib/nvme/nvme_pcie.o 00:03:02.227 CC lib/accel/accel.o 00:03:02.227 CC lib/init/json_config.o 00:03:02.227 CC lib/blob/blobstore.o 00:03:02.227 CC lib/init/subsystem.o 00:03:02.485 CC lib/init/subsystem_rpc.o 00:03:02.485 CC lib/accel/accel_rpc.o 00:03:02.485 CC lib/init/rpc.o 00:03:02.485 CC lib/blob/request.o 00:03:02.485 CC lib/accel/accel_sw.o 00:03:02.485 CC lib/blob/zeroes.o 00:03:02.485 CC lib/blob/blob_bs_dev.o 00:03:02.485 CC lib/nvme/nvme_qpair.o 00:03:02.485 LIB libspdk_init.a 00:03:02.485 CC lib/nvme/nvme.o 00:03:02.485 CC lib/nvme/nvme_quirks.o 00:03:02.485 CC lib/nvme/nvme_transport.o 00:03:02.485 CC lib/nvme/nvme_discovery.o 00:03:02.485 LIB libspdk_accel.a 00:03:02.485 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:02.485 CC lib/event/app.o 00:03:02.743 CC lib/bdev/bdev.o 00:03:02.743 LIB libspdk_blob.a 00:03:02.743 CC lib/event/reactor.o 00:03:02.743 CC lib/bdev/bdev_rpc.o 00:03:02.743 CC lib/blobfs/blobfs.o 00:03:02.743 CC lib/event/log_rpc.o 00:03:02.743 CC lib/event/app_rpc.o 00:03:02.743 CC lib/event/scheduler_static.o 00:03:02.743 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.001 CC lib/blobfs/tree.o 00:03:03.001 CC lib/bdev/bdev_zone.o 00:03:03.001 CC lib/lvol/lvol.o 00:03:03.001 CC lib/nvme/nvme_tcp.o 00:03:03.001 CC lib/bdev/part.o 00:03:03.001 CC lib/nvme/nvme_opal.o 00:03:03.001 LIB libspdk_event.a 00:03:03.001 CC lib/bdev/scsi_nvme.o 00:03:03.001 CC lib/nvme/nvme_io_msg.o 00:03:03.001 CC lib/nvme/nvme_poll_group.o 00:03:03.001 LIB libspdk_blobfs.a 00:03:03.001 CC lib/nvme/nvme_zns.o 00:03:03.001 CC lib/nvme/nvme_stubs.o 00:03:03.001 CC lib/nvme/nvme_auth.o 00:03:03.001 CC lib/nvme/nvme_rdma.o 00:03:03.001 LIB libspdk_lvol.a 00:03:03.260 LIB libspdk_bdev.a 00:03:03.260 CC lib/scsi/dev.o 00:03:03.260 CC lib/scsi/lun.o 00:03:03.260 CC lib/scsi/scsi.o 00:03:03.260 CC lib/scsi/port.o 00:03:03.260 CC lib/scsi/scsi_bdev.o 00:03:03.260 CC lib/scsi/scsi_pr.o 00:03:03.260 CC lib/scsi/scsi_rpc.o 00:03:03.518 CC lib/scsi/task.o 00:03:03.518 LIB libspdk_scsi.a 00:03:03.518 LIB libspdk_nvme.a 00:03:03.518 CC lib/iscsi/conn.o 00:03:03.518 CC lib/iscsi/init_grp.o 00:03:03.518 CC lib/iscsi/iscsi.o 00:03:03.518 CC lib/iscsi/portal_grp.o 00:03:03.518 CC lib/iscsi/md5.o 00:03:03.518 CC lib/iscsi/param.o 00:03:03.518 CC lib/iscsi/tgt_node.o 00:03:03.518 CC lib/iscsi/iscsi_subsystem.o 00:03:03.518 CC lib/iscsi/iscsi_rpc.o 00:03:03.778 CC lib/nvmf/ctrlr.o 00:03:03.778 CC lib/nvmf/ctrlr_discovery.o 00:03:03.778 CC lib/nvmf/ctrlr_bdev.o 00:03:03.778 CC lib/nvmf/subsystem.o 00:03:03.778 CC lib/nvmf/nvmf.o 00:03:03.778 CC lib/nvmf/nvmf_rpc.o 00:03:03.778 CC lib/nvmf/transport.o 00:03:03.778 CC lib/nvmf/tcp.o 00:03:03.778 CC lib/nvmf/stubs.o 00:03:03.778 CC lib/iscsi/task.o 00:03:03.778 CC lib/nvmf/mdns_server.o 00:03:03.778 CC lib/nvmf/rdma.o 00:03:03.778 CC lib/nvmf/auth.o 00:03:04.036 LIB libspdk_iscsi.a 00:03:04.036 LIB libspdk_nvmf.a 00:03:04.294 CC module/env_dpdk/env_dpdk_rpc.o 00:03:04.294 CC module/keyring/file/keyring.o 00:03:04.294 CC module/keyring/file/keyring_rpc.o 00:03:04.294 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:04.294 CC module/blob/bdev/blob_bdev.o 00:03:04.294 CC module/sock/posix/posix.o 00:03:04.294 CC module/accel/dsa/accel_dsa.o 00:03:04.294 CC module/accel/ioat/accel_ioat.o 00:03:04.294 CC 
module/accel/error/accel_error.o 00:03:04.294 CC module/accel/iaa/accel_iaa.o 00:03:04.294 LIB libspdk_env_dpdk_rpc.a 00:03:04.294 CC module/accel/ioat/accel_ioat_rpc.o 00:03:04.294 CC module/accel/dsa/accel_dsa_rpc.o 00:03:04.294 LIB libspdk_scheduler_dynamic.a 00:03:04.294 LIB libspdk_keyring_file.a 00:03:04.294 CC module/accel/error/accel_error_rpc.o 00:03:04.294 CC module/accel/iaa/accel_iaa_rpc.o 00:03:04.294 LIB libspdk_blob_bdev.a 00:03:04.552 LIB libspdk_accel_ioat.a 00:03:04.553 LIB libspdk_accel_dsa.a 00:03:04.553 LIB libspdk_accel_error.a 00:03:04.553 LIB libspdk_accel_iaa.a 00:03:04.553 CC module/bdev/error/vbdev_error.o 00:03:04.553 CC module/bdev/null/bdev_null.o 00:03:04.553 CC module/bdev/malloc/bdev_malloc.o 00:03:04.553 CC module/bdev/delay/vbdev_delay.o 00:03:04.553 CC module/bdev/gpt/gpt.o 00:03:04.553 CC module/bdev/lvol/vbdev_lvol.o 00:03:04.553 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.553 CC module/bdev/nvme/bdev_nvme.o 00:03:04.553 LIB libspdk_sock_posix.a 00:03:04.553 CC module/bdev/passthru/vbdev_passthru.o 00:03:04.553 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:04.553 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.553 CC module/bdev/error/vbdev_error_rpc.o 00:03:04.553 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.553 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.553 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:04.553 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:04.553 CC module/bdev/null/bdev_null_rpc.o 00:03:04.810 LIB libspdk_bdev_error.a 00:03:04.810 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:04.810 LIB libspdk_bdev_delay.a 00:03:04.810 CC module/bdev/nvme/nvme_rpc.o 00:03:04.811 LIB libspdk_blobfs_bdev.a 00:03:04.811 LIB libspdk_bdev_passthru.a 00:03:04.811 LIB libspdk_bdev_malloc.a 00:03:04.811 CC module/bdev/nvme/bdev_mdns_client.o 00:03:04.811 LIB libspdk_bdev_null.a 00:03:04.811 LIB libspdk_bdev_gpt.a 00:03:04.811 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:04.811 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:04.811 CC module/bdev/split/vbdev_split.o 00:03:04.811 CC module/bdev/raid/bdev_raid.o 00:03:04.811 CC module/bdev/split/vbdev_split_rpc.o 00:03:04.811 CC module/bdev/aio/bdev_aio.o 00:03:04.811 CC module/bdev/raid/bdev_raid_rpc.o 00:03:04.811 CC module/bdev/raid/bdev_raid_sb.o 00:03:04.811 LIB libspdk_bdev_lvol.a 00:03:04.811 CC module/bdev/raid/raid0.o 00:03:04.811 CC module/bdev/aio/bdev_aio_rpc.o 00:03:04.811 CC module/bdev/raid/raid1.o 00:03:04.811 LIB libspdk_bdev_zone_block.a 00:03:04.811 CC module/bdev/raid/concat.o 00:03:05.069 LIB libspdk_bdev_split.a 00:03:05.069 LIB libspdk_bdev_aio.a 00:03:05.069 LIB libspdk_bdev_nvme.a 00:03:05.069 LIB libspdk_bdev_raid.a 00:03:05.327 CC module/event/subsystems/iobuf/iobuf.o 00:03:05.327 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:05.327 CC module/event/subsystems/vmd/vmd.o 00:03:05.327 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:05.327 CC module/event/subsystems/sock/sock.o 00:03:05.327 CC module/event/subsystems/scheduler/scheduler.o 00:03:05.327 CC module/event/subsystems/keyring/keyring.o 00:03:05.327 LIB libspdk_event_keyring.a 00:03:05.327 LIB libspdk_event_sock.a 00:03:05.327 LIB libspdk_event_scheduler.a 00:03:05.327 LIB libspdk_event_vmd.a 00:03:05.327 LIB libspdk_event_iobuf.a 00:03:05.327 CC module/event/subsystems/accel/accel.o 00:03:05.585 LIB libspdk_event_accel.a 00:03:05.585 CC module/event/subsystems/bdev/bdev.o 00:03:05.585 LIB libspdk_event_bdev.a 00:03:05.912 CC module/event/subsystems/scsi/scsi.o 00:03:05.912 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:03:05.912 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:05.912 LIB libspdk_event_scsi.a 00:03:05.912 LIB libspdk_event_nvmf.a 00:03:05.912 CC module/event/subsystems/iscsi/iscsi.o 00:03:06.182 LIB libspdk_event_iscsi.a 00:03:06.182 CC app/trace_record/trace_record.o 00:03:06.182 TEST_HEADER include/spdk/accel.h 00:03:06.182 TEST_HEADER include/spdk/accel_module.h 00:03:06.182 TEST_HEADER include/spdk/assert.h 00:03:06.182 CXX app/trace/trace.o 00:03:06.182 TEST_HEADER include/spdk/barrier.h 00:03:06.182 TEST_HEADER include/spdk/base64.h 00:03:06.182 TEST_HEADER include/spdk/bdev.h 00:03:06.182 TEST_HEADER include/spdk/bdev_module.h 00:03:06.182 TEST_HEADER include/spdk/bdev_zone.h 00:03:06.182 TEST_HEADER include/spdk/bit_array.h 00:03:06.182 TEST_HEADER include/spdk/bit_pool.h 00:03:06.182 TEST_HEADER include/spdk/blob.h 00:03:06.182 TEST_HEADER include/spdk/blob_bdev.h 00:03:06.182 TEST_HEADER include/spdk/blobfs.h 00:03:06.182 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:06.182 TEST_HEADER include/spdk/conf.h 00:03:06.182 TEST_HEADER include/spdk/config.h 00:03:06.182 TEST_HEADER include/spdk/cpuset.h 00:03:06.182 CC app/nvmf_tgt/nvmf_main.o 00:03:06.182 TEST_HEADER include/spdk/crc16.h 00:03:06.182 TEST_HEADER include/spdk/crc32.h 00:03:06.182 TEST_HEADER include/spdk/crc64.h 00:03:06.182 TEST_HEADER include/spdk/dif.h 00:03:06.182 TEST_HEADER include/spdk/dma.h 00:03:06.182 TEST_HEADER include/spdk/endian.h 00:03:06.182 TEST_HEADER include/spdk/env.h 00:03:06.182 TEST_HEADER include/spdk/env_dpdk.h 00:03:06.182 TEST_HEADER include/spdk/event.h 00:03:06.182 TEST_HEADER include/spdk/fd.h 00:03:06.182 TEST_HEADER include/spdk/fd_group.h 00:03:06.182 TEST_HEADER include/spdk/file.h 00:03:06.182 TEST_HEADER include/spdk/ftl.h 00:03:06.182 TEST_HEADER include/spdk/gpt_spec.h 00:03:06.182 CC examples/accel/perf/accel_perf.o 00:03:06.182 TEST_HEADER include/spdk/hexlify.h 00:03:06.182 TEST_HEADER include/spdk/histogram_data.h 00:03:06.182 TEST_HEADER include/spdk/idxd.h 00:03:06.182 TEST_HEADER include/spdk/idxd_spec.h 00:03:06.182 TEST_HEADER include/spdk/init.h 00:03:06.182 TEST_HEADER include/spdk/ioat.h 00:03:06.182 TEST_HEADER include/spdk/ioat_spec.h 00:03:06.182 TEST_HEADER include/spdk/iscsi_spec.h 00:03:06.182 TEST_HEADER include/spdk/json.h 00:03:06.182 TEST_HEADER include/spdk/jsonrpc.h 00:03:06.182 TEST_HEADER include/spdk/keyring.h 00:03:06.182 CC test/dma/test_dma/test_dma.o 00:03:06.182 TEST_HEADER include/spdk/keyring_module.h 00:03:06.182 TEST_HEADER include/spdk/likely.h 00:03:06.182 TEST_HEADER include/spdk/log.h 00:03:06.182 TEST_HEADER include/spdk/lvol.h 00:03:06.182 TEST_HEADER include/spdk/memory.h 00:03:06.182 TEST_HEADER include/spdk/mmio.h 00:03:06.182 TEST_HEADER include/spdk/nbd.h 00:03:06.182 TEST_HEADER include/spdk/notify.h 00:03:06.182 TEST_HEADER include/spdk/nvme.h 00:03:06.182 TEST_HEADER include/spdk/nvme_intel.h 00:03:06.182 LINK spdk_trace_record 00:03:06.182 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:06.182 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:06.182 TEST_HEADER include/spdk/nvme_spec.h 00:03:06.182 TEST_HEADER include/spdk/nvme_zns.h 00:03:06.182 TEST_HEADER include/spdk/nvmf.h 00:03:06.182 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:06.182 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:06.182 CC test/accel/dif/dif.o 00:03:06.182 TEST_HEADER include/spdk/nvmf_spec.h 00:03:06.182 TEST_HEADER include/spdk/nvmf_transport.h 00:03:06.182 CC test/bdev/bdevio/bdevio.o 00:03:06.182 CC 
test/app/bdev_svc/bdev_svc.o 00:03:06.182 TEST_HEADER include/spdk/opal.h 00:03:06.183 TEST_HEADER include/spdk/opal_spec.h 00:03:06.183 TEST_HEADER include/spdk/pci_ids.h 00:03:06.183 TEST_HEADER include/spdk/pipe.h 00:03:06.183 TEST_HEADER include/spdk/queue.h 00:03:06.183 CC test/blobfs/mkfs/mkfs.o 00:03:06.183 TEST_HEADER include/spdk/reduce.h 00:03:06.183 TEST_HEADER include/spdk/rpc.h 00:03:06.183 TEST_HEADER include/spdk/scheduler.h 00:03:06.183 TEST_HEADER include/spdk/scsi.h 00:03:06.183 TEST_HEADER include/spdk/scsi_spec.h 00:03:06.183 TEST_HEADER include/spdk/sock.h 00:03:06.183 TEST_HEADER include/spdk/stdinc.h 00:03:06.183 TEST_HEADER include/spdk/string.h 00:03:06.183 TEST_HEADER include/spdk/thread.h 00:03:06.183 TEST_HEADER include/spdk/trace.h 00:03:06.183 TEST_HEADER include/spdk/trace_parser.h 00:03:06.183 TEST_HEADER include/spdk/tree.h 00:03:06.439 TEST_HEADER include/spdk/ublk.h 00:03:06.439 TEST_HEADER include/spdk/util.h 00:03:06.439 TEST_HEADER include/spdk/uuid.h 00:03:06.439 TEST_HEADER include/spdk/version.h 00:03:06.439 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:06.439 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:06.439 TEST_HEADER include/spdk/vhost.h 00:03:06.439 TEST_HEADER include/spdk/vmd.h 00:03:06.439 TEST_HEADER include/spdk/xor.h 00:03:06.439 TEST_HEADER include/spdk/zipf.h 00:03:06.439 CXX test/cpp_headers/accel.o 00:03:06.439 LINK nvmf_tgt 00:03:06.439 LINK bdev_svc 00:03:06.439 LINK test_dma 00:03:06.439 LINK accel_perf 00:03:06.439 LINK mkfs 00:03:06.439 CXX test/cpp_headers/accel_module.o 00:03:06.439 LINK bdevio 00:03:06.439 LINK dif 00:03:06.439 CC test/env/mem_callbacks/mem_callbacks.o 00:03:06.439 CC test/env/vtophys/vtophys.o 00:03:06.697 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:06.697 CC app/iscsi_tgt/iscsi_tgt.o 00:03:06.697 LINK vtophys 00:03:06.697 CXX test/cpp_headers/assert.o 00:03:06.697 CC examples/bdev/hello_world/hello_bdev.o 00:03:06.697 CC app/spdk_tgt/spdk_tgt.o 00:03:06.697 CC examples/bdev/bdevperf/bdevperf.o 00:03:06.697 CC test/event/event_perf/event_perf.o 00:03:06.697 LINK iscsi_tgt 00:03:06.697 LINK nvme_fuzz 00:03:06.697 CXX test/cpp_headers/barrier.o 00:03:06.697 LINK spdk_tgt 00:03:06.697 LINK spdk_trace 00:03:06.697 LINK hello_bdev 00:03:06.697 CC examples/blob/hello_world/hello_blob.o 00:03:06.697 LINK event_perf 00:03:06.697 LINK mem_callbacks 00:03:06.697 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:06.955 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:06.955 CC examples/ioat/perf/perf.o 00:03:06.955 CXX test/cpp_headers/base64.o 00:03:06.955 LINK bdevperf 00:03:06.955 LINK hello_blob 00:03:06.955 CC test/event/reactor/reactor.o 00:03:06.955 CC app/spdk_lspci/spdk_lspci.o 00:03:06.955 CC test/app/histogram_perf/histogram_perf.o 00:03:06.955 CC app/spdk_nvme_perf/perf.o 00:03:06.955 LINK ioat_perf 00:03:06.955 LINK env_dpdk_post_init 00:03:06.955 LINK reactor 00:03:06.955 LINK histogram_perf 00:03:06.955 LINK spdk_lspci 00:03:06.955 CXX test/cpp_headers/bdev.o 00:03:06.955 CXX test/cpp_headers/bdev_module.o 00:03:06.955 CC test/env/memory/memory_ut.o 00:03:06.955 CC examples/ioat/verify/verify.o 00:03:06.955 CC test/event/reactor_perf/reactor_perf.o 00:03:06.955 CC app/spdk_nvme_identify/identify.o 00:03:06.955 CC examples/blob/cli/blobcli.o 00:03:07.212 CC examples/nvme/hello_world/hello_world.o 00:03:07.212 LINK spdk_nvme_perf 00:03:07.212 LINK reactor_perf 00:03:07.212 LINK verify 00:03:07.212 CXX test/cpp_headers/bdev_zone.o 00:03:07.212 LINK hello_world 00:03:07.212 CC 
test/env/pci/pci_ut.o 00:03:07.212 LINK blobcli 00:03:07.212 CC test/app/jsoncat/jsoncat.o 00:03:07.212 LINK iscsi_fuzz 00:03:07.212 CC examples/nvme/reconnect/reconnect.o 00:03:07.212 CC examples/sock/hello_world/hello_sock.o 00:03:07.212 LINK spdk_nvme_identify 00:03:07.212 LINK jsoncat 00:03:07.212 CC test/app/stub/stub.o 00:03:07.212 LINK pci_ut 00:03:07.212 CXX test/cpp_headers/bit_array.o 00:03:07.470 LINK hello_sock 00:03:07.470 LINK reconnect 00:03:07.470 gmake[2]: Nothing to be done for 'all'. 00:03:07.470 CXX test/cpp_headers/bit_pool.o 00:03:07.470 CXX test/cpp_headers/blob.o 00:03:07.470 CC test/nvme/aer/aer.o 00:03:07.470 CC test/rpc_client/rpc_client_test.o 00:03:07.470 LINK stub 00:03:07.470 CC app/spdk_nvme_discover/discovery_aer.o 00:03:07.470 CC app/spdk_top/spdk_top.o 00:03:07.470 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:07.470 LINK memory_ut 00:03:07.470 CXX test/cpp_headers/blob_bdev.o 00:03:07.470 LINK rpc_client_test 00:03:07.470 LINK spdk_nvme_discover 00:03:07.470 LINK aer 00:03:07.470 CXX test/cpp_headers/blobfs.o 00:03:07.470 CC examples/vmd/lsvmd/lsvmd.o 00:03:07.470 CC examples/nvme/arbitration/arbitration.o 00:03:07.470 CC test/nvme/reset/reset.o 00:03:07.470 CC examples/nvme/hotplug/hotplug.o 00:03:07.728 LINK lsvmd 00:03:07.728 CC test/nvme/sgl/sgl.o 00:03:07.728 CC examples/vmd/led/led.o 00:03:07.728 CXX test/cpp_headers/blobfs_bdev.o 00:03:07.728 LINK nvme_manage 00:03:07.728 LINK arbitration 00:03:07.728 LINK reset 00:03:07.728 LINK spdk_top 00:03:07.728 CC test/nvme/e2edp/nvme_dp.o 00:03:07.728 LINK led 00:03:07.728 LINK hotplug 00:03:07.728 LINK sgl 00:03:07.728 CC app/fio/nvme/fio_plugin.o 00:03:07.728 CXX test/cpp_headers/conf.o 00:03:07.728 LINK nvme_dp 00:03:07.728 CXX test/cpp_headers/config.o 00:03:07.728 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:07.728 CC app/fio/bdev/fio_plugin.o 00:03:07.728 CC test/nvme/overhead/overhead.o 00:03:07.728 CC examples/util/zipf/zipf.o 00:03:07.728 CC examples/nvmf/nvmf/nvmf.o 00:03:07.728 CC examples/nvme/abort/abort.o 00:03:07.986 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:07.986 LINK cmb_copy 00:03:07.986 CXX test/cpp_headers/cpuset.o 00:03:07.986 LINK zipf 00:03:07.986 CC examples/thread/thread/thread_ex.o 00:03:07.986 fio_plugin.c:1559:29: LINK overhead 00:03:07.986 warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:07.986 struct spdk_nvme_fdp_ruhs ruhs; 00:03:07.986 ^ 00:03:07.986 LINK abort 00:03:07.986 1 warning generated. 
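The clang warning above (its first line is interleaved with a parallel "LINK overhead" build message) comes from embedding a type that ends in a C flexible array member somewhere other than the last position of an enclosing struct. A minimal stand-alone C sketch of the pattern follows; it is an illustration only, not the SPDK fio_plugin source, and the type and field names are made up:

struct ruhs_like {              /* stand-in for a type such as spdk_nvme_fdp_ruhs  */
        unsigned int count;
        unsigned int desc[];    /* flexible array member => "variable sized type"  */
};

struct enclosing {
        struct ruhs_like ruhs;  /* variable sized type not at the end of the struct:
                                   clang accepts this only as a GNU extension and
                                   reports -Wgnu-variable-sized-type-not-at-end    */
        unsigned int tail[4];
};

clang reports the same diagnostic text for code like this; in the build above it is emitted as a warning rather than an error ("1 warning generated."), so compilation and the subsequent LINK steps continue.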
00:03:07.986 LINK spdk_nvme 00:03:07.986 LINK pmr_persistence 00:03:07.986 LINK nvmf 00:03:07.986 LINK spdk_bdev 00:03:07.986 CXX test/cpp_headers/crc16.o 00:03:07.986 CC test/nvme/err_injection/err_injection.o 00:03:07.986 CC examples/idxd/perf/perf.o 00:03:07.986 CC test/thread/poller_perf/poller_perf.o 00:03:07.986 LINK thread 00:03:07.986 CXX test/cpp_headers/crc32.o 00:03:07.986 CC test/nvme/startup/startup.o 00:03:07.986 CC test/thread/lock/spdk_lock.o 00:03:07.986 CXX test/cpp_headers/crc64.o 00:03:08.245 LINK poller_perf 00:03:08.245 LINK err_injection 00:03:08.245 LINK idxd_perf 00:03:08.245 CXX test/cpp_headers/dif.o 00:03:08.245 LINK startup 00:03:08.245 CXX test/cpp_headers/dma.o 00:03:08.245 CC test/nvme/reserve/reserve.o 00:03:08.245 CXX test/cpp_headers/endian.o 00:03:08.245 CC test/nvme/simple_copy/simple_copy.o 00:03:08.245 CC test/nvme/connect_stress/connect_stress.o 00:03:08.245 LINK reserve 00:03:08.245 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:08.245 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:08.245 CXX test/cpp_headers/env.o 00:03:08.245 LINK simple_copy 00:03:08.245 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:08.245 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:08.245 LINK histogram_ut 00:03:08.245 LINK connect_stress 00:03:08.503 LINK spdk_lock 00:03:08.503 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:08.503 CC test/nvme/boot_partition/boot_partition.o 00:03:08.503 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:08.503 CXX test/cpp_headers/env_dpdk.o 00:03:08.503 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:08.503 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:08.503 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:08.503 LINK tree_ut 00:03:08.503 LINK boot_partition 00:03:08.503 CXX test/cpp_headers/event.o 00:03:08.503 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:08.503 LINK blob_bdev_ut 00:03:08.503 CC test/nvme/compliance/nvme_compliance.o 00:03:08.762 LINK scsi_nvme_ut 00:03:08.762 LINK dma_ut 00:03:08.762 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:08.762 CXX test/cpp_headers/fd.o 00:03:08.762 CXX test/cpp_headers/fd_group.o 00:03:08.762 CC test/unit/lib/event/app.c/app_ut.o 00:03:08.762 LINK blobfs_async_ut 00:03:08.762 LINK nvme_compliance 00:03:08.762 LINK blobfs_sync_ut 00:03:08.762 CXX test/cpp_headers/file.o 00:03:08.762 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:08.762 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:08.762 LINK accel_ut 00:03:08.762 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:09.020 CC test/nvme/fused_ordering/fused_ordering.o 00:03:09.020 CXX test/cpp_headers/ftl.o 00:03:09.021 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:09.021 LINK blobfs_bdev_ut 00:03:09.021 LINK app_ut 00:03:09.021 LINK fused_ordering 00:03:09.021 LINK ioat_ut 00:03:09.021 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:09.021 CXX test/cpp_headers/gpt_spec.o 00:03:09.021 LINK part_ut 00:03:09.021 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:09.021 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:09.021 CXX test/cpp_headers/hexlify.o 00:03:09.021 LINK reactor_ut 00:03:09.021 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:09.280 LINK gpt_ut 00:03:09.280 CXX test/cpp_headers/histogram_data.o 00:03:09.280 LINK doorbell_aers 00:03:09.280 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:09.280 CC test/nvme/fdp/fdp.o 00:03:09.280 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:09.280 CXX test/cpp_headers/idxd.o 00:03:09.280 LINK init_grp_ut 00:03:09.280 CC 
test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:09.280 LINK fdp 00:03:09.280 LINK bdev_ut 00:03:09.280 CXX test/cpp_headers/idxd_spec.o 00:03:09.539 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:09.539 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:09.539 LINK param_ut 00:03:09.539 LINK conn_ut 00:03:09.539 LINK vbdev_lvol_ut 00:03:09.539 CXX test/cpp_headers/init.o 00:03:09.539 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:09.539 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:09.539 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:09.539 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:09.539 CXX test/cpp_headers/ioat.o 00:03:09.539 LINK jsonrpc_server_ut 00:03:09.796 LINK bdev_zone_ut 00:03:09.796 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:09.796 CXX test/cpp_headers/ioat_spec.o 00:03:09.796 LINK portal_grp_ut 00:03:09.796 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:09.796 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:09.796 CXX test/cpp_headers/iscsi_spec.o 00:03:09.796 LINK tgt_node_ut 00:03:10.107 LINK json_util_ut 00:03:10.107 LINK json_parse_ut 00:03:10.107 LINK iscsi_ut 00:03:10.107 CXX test/cpp_headers/json.o 00:03:10.107 LINK blob_ut 00:03:10.107 LINK bdev_raid_sb_ut 00:03:10.107 LINK vbdev_zone_block_ut 00:03:10.107 LINK bdev_raid_ut 00:03:10.107 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:10.107 CXX test/cpp_headers/jsonrpc.o 00:03:10.107 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:10.107 CXX test/cpp_headers/keyring.o 00:03:10.107 CC test/unit/lib/log/log.c/log_ut.o 00:03:10.107 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:10.107 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:10.107 LINK bdev_ut 00:03:10.107 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:10.107 CXX test/cpp_headers/keyring_module.o 00:03:10.107 LINK log_ut 00:03:10.107 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:10.107 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:10.107 CXX test/cpp_headers/likely.o 00:03:10.366 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:10.366 LINK concat_ut 00:03:10.366 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:10.366 LINK raid1_ut 00:03:10.366 CXX test/cpp_headers/log.o 00:03:10.366 LINK json_write_ut 00:03:10.366 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:10.366 LINK notify_ut 00:03:10.366 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:10.366 CXX test/cpp_headers/lvol.o 00:03:10.366 LINK raid0_ut 00:03:10.366 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:10.366 CXX test/cpp_headers/memory.o 00:03:10.624 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:10.624 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:10.624 LINK lvol_ut 00:03:10.624 CXX test/cpp_headers/mmio.o 00:03:10.624 LINK dev_ut 00:03:10.624 CXX test/cpp_headers/nbd.o 00:03:10.624 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:10.624 CXX test/cpp_headers/notify.o 00:03:10.624 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:10.882 LINK nvme_ut 00:03:10.882 CXX test/cpp_headers/nvme.o 00:03:10.882 LINK nvme_ctrlr_cmd_ut 00:03:10.882 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:10.882 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:10.882 CXX test/cpp_headers/nvme_intel.o 00:03:10.882 LINK scsi_ut 00:03:10.882 LINK lun_ut 00:03:10.882 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:10.882 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:11.140 CXX test/cpp_headers/nvme_ocssd.o 00:03:11.140 CC 
test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:11.140 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:11.140 LINK nvme_ctrlr_ut 00:03:11.140 LINK ctrlr_ut 00:03:11.140 LINK bdev_nvme_ut 00:03:11.140 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:11.140 LINK nvme_ns_ut 00:03:11.140 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:11.140 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:11.140 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:11.140 LINK tcp_ut 00:03:11.140 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:11.140 CXX test/cpp_headers/nvme_spec.o 00:03:11.398 LINK scsi_bdev_ut 00:03:11.398 LINK ctrlr_bdev_ut 00:03:11.398 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:11.398 CXX test/cpp_headers/nvme_zns.o 00:03:11.398 LINK scsi_pr_ut 00:03:11.398 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:11.398 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:11.398 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:11.398 CXX test/cpp_headers/nvmf.o 00:03:11.398 LINK ctrlr_discovery_ut 00:03:11.657 LINK nvmf_ut 00:03:11.657 LINK subsystem_ut 00:03:11.657 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:11.657 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:11.657 CXX test/cpp_headers/nvmf_cmd.o 00:03:11.657 LINK sock_ut 00:03:11.657 LINK posix_ut 00:03:11.657 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:11.657 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:11.657 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:11.657 LINK auth_ut 00:03:11.657 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:11.916 CXX test/cpp_headers/nvmf_spec.o 00:03:11.916 LINK nvme_ns_cmd_ut 00:03:11.916 LINK iobuf_ut 00:03:11.916 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:11.916 LINK thread_ut 00:03:11.916 CXX test/cpp_headers/nvmf_transport.o 00:03:11.916 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:11.916 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:11.916 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:11.916 LINK base64_ut 00:03:11.916 CXX test/cpp_headers/opal.o 00:03:12.175 LINK rdma_ut 00:03:12.175 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:12.175 LINK nvme_ns_ocssd_cmd_ut 00:03:12.175 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:12.175 LINK transport_ut 00:03:12.175 CXX test/cpp_headers/opal_spec.o 00:03:12.175 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:12.175 LINK nvme_poll_group_ut 00:03:12.175 LINK bit_array_ut 00:03:12.175 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:12.175 LINK nvme_quirks_ut 00:03:12.175 CXX test/cpp_headers/pci_ids.o 00:03:12.175 LINK cpuset_ut 00:03:12.175 LINK pci_event_ut 00:03:12.175 CXX test/cpp_headers/pipe.o 00:03:12.175 LINK crc16_ut 00:03:12.175 CXX test/cpp_headers/queue.o 00:03:12.175 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:12.433 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:12.433 LINK nvme_pcie_ut 00:03:12.433 CXX test/cpp_headers/reduce.o 00:03:12.433 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:12.433 CXX test/cpp_headers/rpc.o 00:03:12.433 LINK crc32_ieee_ut 00:03:12.433 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:12.433 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:12.433 LINK nvme_qpair_ut 00:03:12.433 LINK crc32c_ut 00:03:12.433 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:12.433 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:12.433 LINK crc64_ut 00:03:12.433 CXX test/cpp_headers/scheduler.o 00:03:12.433 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:12.433 CC 
test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:12.433 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:12.433 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:12.433 LINK rpc_ut 00:03:12.692 LINK subsystem_ut 00:03:12.692 CXX test/cpp_headers/scsi.o 00:03:12.692 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:12.692 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:12.692 LINK keyring_ut 00:03:12.692 LINK idxd_user_ut 00:03:12.692 LINK iov_ut 00:03:12.692 CXX test/cpp_headers/scsi_spec.o 00:03:12.692 LINK nvme_transport_ut 00:03:12.692 LINK rpc_ut 00:03:12.692 LINK dif_ut 00:03:12.692 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:12.692 CXX test/cpp_headers/sock.o 00:03:12.692 CXX test/cpp_headers/stdinc.o 00:03:12.692 CC test/unit/lib/util/math.c/math_ut.o 00:03:12.692 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:12.692 LINK nvme_tcp_ut 00:03:12.692 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:12.950 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:12.951 LINK math_ut 00:03:12.951 CXX test/cpp_headers/string.o 00:03:12.951 LINK nvme_io_msg_ut 00:03:12.951 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:12.951 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:12.951 CC test/unit/lib/util/string.c/string_ut.o 00:03:12.951 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:12.951 LINK idxd_ut 00:03:12.951 CXX test/cpp_headers/thread.o 00:03:12.951 CXX test/cpp_headers/trace.o 00:03:12.951 LINK common_ut 00:03:12.951 LINK pipe_ut 00:03:12.951 CXX test/cpp_headers/trace_parser.o 00:03:12.951 LINK string_ut 00:03:12.951 LINK nvme_opal_ut 00:03:12.951 CXX test/cpp_headers/tree.o 00:03:13.209 CXX test/cpp_headers/ublk.o 00:03:13.209 CXX test/cpp_headers/util.o 00:03:13.209 LINK xor_ut 00:03:13.209 CXX test/cpp_headers/uuid.o 00:03:13.209 CXX test/cpp_headers/version.o 00:03:13.209 CXX test/cpp_headers/vfio_user_pci.o 00:03:13.209 CXX test/cpp_headers/vfio_user_spec.o 00:03:13.209 CXX test/cpp_headers/vhost.o 00:03:13.209 CXX test/cpp_headers/vmd.o 00:03:13.209 LINK nvme_pcie_common_ut 00:03:13.209 CXX test/cpp_headers/xor.o 00:03:13.209 CXX test/cpp_headers/zipf.o 00:03:13.209 LINK nvme_fabric_ut 00:03:13.467 LINK nvme_rdma_ut 00:03:13.467 00:03:13.467 real 0m57.138s 00:03:13.467 user 3m30.379s 00:03:13.467 sys 0m45.565s 00:03:13.467 10:07:19 unittest_build -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:03:13.467 ************************************ 00:03:13.467 END TEST unittest_build 00:03:13.467 ************************************ 00:03:13.467 10:07:19 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:13.726 10:07:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:13.726 10:07:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:13.726 10:07:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:13.726 10:07:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.726 10:07:19 -- pm/common@43 -- $ [[ -e /usr/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:13.726 10:07:19 -- pm/common@44 -- $ pid=1315 00:03:13.726 10:07:19 -- pm/common@50 -- $ kill -TERM 1315 00:03:13.726 10:07:19 -- spdk/autotest.sh@25 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:13.726 10:07:19 -- nvmf/common.sh@7 -- # uname -s 00:03:13.726 10:07:19 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:13.726 10:07:19 -- nvmf/common.sh@7 -- # return 0 00:03:13.726 10:07:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:13.726 10:07:19 -- spdk/autotest.sh@32 -- # uname -s 
00:03:13.726 10:07:19 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:03:13.726 10:07:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:13.726 10:07:19 -- pm/common@17 -- # local monitor 00:03:13.726 10:07:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:13.726 10:07:19 -- pm/common@25 -- # sleep 1 00:03:13.726 10:07:19 -- pm/common@21 -- # date +%s 00:03:13.726 10:07:19 -- pm/common@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1718014039 00:03:13.726 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1718014039_collect-vmstat.pm.log 00:03:15.103 10:07:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:15.103 10:07:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:15.103 10:07:20 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:15.103 10:07:20 -- common/autotest_common.sh@10 -- # set +x 00:03:15.103 10:07:20 -- spdk/autotest.sh@59 -- # create_test_list 00:03:15.103 10:07:20 -- common/autotest_common.sh@747 -- # xtrace_disable 00:03:15.103 10:07:20 -- common/autotest_common.sh@10 -- # set +x 00:03:15.103 10:07:20 -- spdk/autotest.sh@61 -- # dirname /usr/home/vagrant/spdk_repo/spdk/autotest.sh 00:03:15.103 10:07:20 -- spdk/autotest.sh@61 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk 00:03:15.103 10:07:20 -- spdk/autotest.sh@61 -- # src=/usr/home/vagrant/spdk_repo/spdk 00:03:15.103 10:07:20 -- spdk/autotest.sh@62 -- # out=/usr/home/vagrant/spdk_repo/spdk/../output 00:03:15.103 10:07:20 -- spdk/autotest.sh@63 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:03:15.103 10:07:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:15.103 10:07:20 -- common/autotest_common.sh@1454 -- # uname 00:03:15.103 10:07:20 -- common/autotest_common.sh@1454 -- # '[' FreeBSD = FreeBSD ']' 00:03:15.103 10:07:20 -- common/autotest_common.sh@1455 -- # kldunload contigmem.ko 00:03:15.103 kldunload: can't find file contigmem.ko 00:03:15.103 10:07:20 -- common/autotest_common.sh@1455 -- # true 00:03:15.103 10:07:20 -- common/autotest_common.sh@1456 -- # '[' -n '' ']' 00:03:15.103 10:07:20 -- common/autotest_common.sh@1462 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:03:15.103 10:07:20 -- common/autotest_common.sh@1463 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:03:15.103 10:07:20 -- common/autotest_common.sh@1464 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:03:15.103 10:07:20 -- common/autotest_common.sh@1465 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:03:15.103 10:07:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:15.103 10:07:20 -- common/autotest_common.sh@1474 -- # uname 00:03:15.103 10:07:20 -- common/autotest_common.sh@1474 -- # [[ FreeBSD = FreeBSD ]] 00:03:15.103 10:07:20 -- common/autotest_common.sh@1474 -- # sysctl -n kern.ipc.maxsockbuf 00:03:15.103 10:07:20 -- common/autotest_common.sh@1474 -- # (( 2097152 < 4194304 )) 00:03:15.103 10:07:20 -- common/autotest_common.sh@1475 -- # sysctl kern.ipc.maxsockbuf=4194304 00:03:15.103 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:03:15.103 10:07:20 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:15.103 10:07:20 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:15.103 10:07:20 -- spdk/autotest.sh@72 -- # hash lcov 
00:03:15.103 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:03:15.103 10:07:20 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:15.103 10:07:20 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:15.103 10:07:20 -- common/autotest_common.sh@10 -- # set +x 00:03:15.103 10:07:20 -- spdk/autotest.sh@91 -- # rm -f 00:03:15.103 10:07:20 -- spdk/autotest.sh@94 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:15.103 kldunload: can't find file contigmem.ko 00:03:15.103 kldunload: can't find file nic_uio.ko 00:03:15.103 10:07:20 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:15.103 10:07:20 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:15.103 10:07:20 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:15.103 10:07:20 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:15.103 10:07:20 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:15.103 10:07:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:15.103 10:07:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:15.103 10:07:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:03:15.103 10:07:20 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:03:15.103 10:07:20 -- scripts/common.sh@387 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:03:15.103 nvme0ns1 is not a block device 00:03:15.103 10:07:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:03:15.103 /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:03:15.103 10:07:20 -- scripts/common.sh@391 -- # pt= 00:03:15.103 10:07:20 -- scripts/common.sh@392 -- # return 1 00:03:15.103 10:07:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:03:15.103 1+0 records in 00:03:15.103 1+0 records out 00:03:15.103 1048576 bytes transferred in 0.007789 secs (134621359 bytes/sec) 00:03:15.103 10:07:20 -- spdk/autotest.sh@118 -- # sync 00:03:15.671 10:07:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:15.671 10:07:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:15.671 10:07:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.238 10:07:21 -- spdk/autotest.sh@124 -- # uname -s 00:03:16.238 10:07:21 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:03:16.238 10:07:21 -- spdk/autotest.sh@128 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:16.238 Contigmem (not present) 00:03:16.238 Buffer Size: not set 00:03:16.238 Num Buffers: not set 00:03:16.238 00:03:16.238 00:03:16.238 Type BDF Vendor Device Driver 00:03:16.238 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:03:16.238 10:07:21 -- spdk/autotest.sh@130 -- # uname -s 00:03:16.238 10:07:21 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]] 00:03:16.238 10:07:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:16.238 10:07:21 -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:16.238 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:03:16.238 10:07:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:16.238 10:07:21 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:16.238 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:03:16.238 10:07:21 -- spdk/autotest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:16.238 kldunload: can't find file nic_uio.ko 00:03:16.238 hw.nic_uio.bdfs="0:16:0" 00:03:16.238 hw.contigmem.num_buffers="8" 00:03:16.238 
hw.contigmem.buffer_size="268435456" 00:03:16.804 10:07:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:16.804 10:07:22 -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:16.804 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:03:16.804 10:07:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:16.804 10:07:22 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:03:16.804 10:07:22 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:03:16.804 10:07:22 -- common/autotest_common.sh@1576 -- # bdfs=() 00:03:16.804 10:07:22 -- common/autotest_common.sh@1576 -- # local bdfs 00:03:16.804 10:07:22 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:03:16.804 10:07:22 -- common/autotest_common.sh@1512 -- # bdfs=() 00:03:16.804 10:07:22 -- common/autotest_common.sh@1512 -- # local bdfs 00:03:16.804 10:07:22 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:16.804 10:07:22 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:03:16.804 10:07:22 -- common/autotest_common.sh@1513 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:16.804 10:07:22 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:03:16.804 10:07:22 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:03:16.804 10:07:22 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:03:16.804 10:07:22 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:16.804 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:03:16.804 10:07:22 -- common/autotest_common.sh@1579 -- # device= 00:03:16.804 10:07:22 -- common/autotest_common.sh@1579 -- # true 00:03:16.804 10:07:22 -- common/autotest_common.sh@1580 -- # [[ '' == \0\x\0\a\5\4 ]] 00:03:16.804 10:07:22 -- common/autotest_common.sh@1585 -- # printf '%s\n' 00:03:16.804 10:07:22 -- common/autotest_common.sh@1591 -- # [[ -z '' ]] 00:03:16.804 10:07:22 -- common/autotest_common.sh@1592 -- # return 0 00:03:16.804 10:07:22 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:03:16.804 10:07:22 -- spdk/autotest.sh@151 -- # run_test unittest /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:16.804 10:07:22 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:16.804 10:07:22 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:16.804 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:03:17.064 ************************************ 00:03:17.064 START TEST unittest 00:03:17.064 ************************************ 00:03:17.064 10:07:22 unittest -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:17.064 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:17.064 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:17.064 + testdir=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:17.064 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:17.064 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit/../.. 
00:03:17.064 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:17.064 + source /usr/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:03:17.064 ++ rpc_py=rpc_cmd 00:03:17.064 ++ set -e 00:03:17.064 ++ shopt -s nullglob 00:03:17.064 ++ shopt -s extglob 00:03:17.064 ++ shopt -s inherit_errexit 00:03:17.064 ++ '[' -z /usr/home/vagrant/spdk_repo/spdk/../output ']' 00:03:17.064 ++ [[ -e /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:03:17.064 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:03:17.064 +++ CONFIG_WPDK_DIR= 00:03:17.064 +++ CONFIG_ASAN=n 00:03:17.064 +++ CONFIG_VBDEV_COMPRESS=n 00:03:17.064 +++ CONFIG_HAVE_EXECINFO_H=y 00:03:17.064 +++ CONFIG_USDT=n 00:03:17.064 +++ CONFIG_CUSTOMOCF=n 00:03:17.064 +++ CONFIG_PREFIX=/usr/local 00:03:17.064 +++ CONFIG_RBD=n 00:03:17.064 +++ CONFIG_LIBDIR= 00:03:17.064 +++ CONFIG_IDXD=y 00:03:17.064 +++ CONFIG_NVME_CUSE=n 00:03:17.064 +++ CONFIG_SMA=n 00:03:17.064 +++ CONFIG_VTUNE=n 00:03:17.064 +++ CONFIG_TSAN=n 00:03:17.064 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:03:17.064 +++ CONFIG_VFIO_USER_DIR= 00:03:17.064 +++ CONFIG_PGO_CAPTURE=n 00:03:17.064 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:03:17.064 +++ CONFIG_ENV=/usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:17.064 +++ CONFIG_LTO=n 00:03:17.064 +++ CONFIG_ISCSI_INITIATOR=n 00:03:17.064 +++ CONFIG_CET=n 00:03:17.064 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:03:17.064 +++ CONFIG_OCF_PATH= 00:03:17.064 +++ CONFIG_RDMA_SET_TOS=y 00:03:17.064 +++ CONFIG_HAVE_ARC4RANDOM=y 00:03:17.064 +++ CONFIG_HAVE_LIBARCHIVE=n 00:03:17.064 +++ CONFIG_UBLK=n 00:03:17.064 +++ CONFIG_ISAL_CRYPTO=y 00:03:17.064 +++ CONFIG_OPENSSL_PATH= 00:03:17.064 +++ CONFIG_OCF=n 00:03:17.064 +++ CONFIG_FUSE=n 00:03:17.064 +++ CONFIG_VTUNE_DIR= 00:03:17.064 +++ CONFIG_FUZZER_LIB= 00:03:17.064 +++ CONFIG_FUZZER=n 00:03:17.064 +++ CONFIG_DPDK_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:17.064 +++ CONFIG_CRYPTO=n 00:03:17.064 +++ CONFIG_PGO_USE=n 00:03:17.064 +++ CONFIG_VHOST=n 00:03:17.064 +++ CONFIG_DAOS=n 00:03:17.064 +++ CONFIG_DPDK_INC_DIR= 00:03:17.064 +++ CONFIG_DAOS_DIR= 00:03:17.064 +++ CONFIG_UNIT_TESTS=y 00:03:17.064 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:03:17.064 +++ CONFIG_VIRTIO=n 00:03:17.064 +++ CONFIG_DPDK_UADK=n 00:03:17.064 +++ CONFIG_COVERAGE=n 00:03:17.064 +++ CONFIG_RDMA=y 00:03:17.064 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:03:17.064 +++ CONFIG_URING_PATH= 00:03:17.064 +++ CONFIG_XNVME=n 00:03:17.064 +++ CONFIG_VFIO_USER=n 00:03:17.064 +++ CONFIG_ARCH=native 00:03:17.064 +++ CONFIG_HAVE_EVP_MAC=y 00:03:17.064 +++ CONFIG_URING_ZNS=n 00:03:17.064 +++ CONFIG_WERROR=y 00:03:17.064 +++ CONFIG_HAVE_LIBBSD=n 00:03:17.064 +++ CONFIG_UBSAN=n 00:03:17.064 +++ CONFIG_IPSEC_MB_DIR= 00:03:17.064 +++ CONFIG_GOLANG=n 00:03:17.064 +++ CONFIG_ISAL=y 00:03:17.064 +++ CONFIG_IDXD_KERNEL=n 00:03:17.064 +++ CONFIG_DPDK_LIB_DIR= 00:03:17.064 +++ CONFIG_RDMA_PROV=verbs 00:03:17.064 +++ CONFIG_APPS=y 00:03:17.064 +++ CONFIG_SHARED=n 00:03:17.064 +++ CONFIG_HAVE_KEYUTILS=n 00:03:17.064 +++ CONFIG_FC_PATH= 00:03:17.064 +++ CONFIG_DPDK_PKG_CONFIG=n 00:03:17.064 +++ CONFIG_FC=n 00:03:17.064 +++ CONFIG_AVAHI=n 00:03:17.064 +++ CONFIG_FIO_PLUGIN=y 00:03:17.064 +++ CONFIG_RAID5F=n 00:03:17.064 +++ CONFIG_EXAMPLES=y 00:03:17.064 +++ CONFIG_TESTS=y 00:03:17.064 +++ CONFIG_CRYPTO_MLX5=n 00:03:17.064 +++ CONFIG_MAX_LCORES= 00:03:17.064 +++ CONFIG_IPSEC_MB=n 00:03:17.064 +++ CONFIG_PGO_DIR= 00:03:17.064 +++ CONFIG_DEBUG=y 00:03:17.064 +++ CONFIG_DPDK_COMPRESSDEV=n 00:03:17.064 +++ 
CONFIG_CROSS_PREFIX= 00:03:17.064 +++ CONFIG_URING=n 00:03:17.064 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:17.064 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:17.064 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common 00:03:17.064 +++ _root=/usr/home/vagrant/spdk_repo/spdk/test/common 00:03:17.064 +++ _root=/usr/home/vagrant/spdk_repo/spdk 00:03:17.064 +++ _app_dir=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:03:17.064 +++ _test_app_dir=/usr/home/vagrant/spdk_repo/spdk/test/app 00:03:17.064 +++ _examples_dir=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:03:17.064 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:03:17.064 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:03:17.064 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:03:17.064 +++ VHOST_APP=("$_app_dir/vhost") 00:03:17.064 +++ DD_APP=("$_app_dir/spdk_dd") 00:03:17.064 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:03:17.064 +++ [[ -e /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:03:17.064 +++ [[ #ifndef SPDK_CONFIG_H 00:03:17.064 #define SPDK_CONFIG_H 00:03:17.064 #define SPDK_CONFIG_APPS 1 00:03:17.064 #define SPDK_CONFIG_ARCH native 00:03:17.064 #undef SPDK_CONFIG_ASAN 00:03:17.064 #undef SPDK_CONFIG_AVAHI 00:03:17.064 #undef SPDK_CONFIG_CET 00:03:17.064 #undef SPDK_CONFIG_COVERAGE 00:03:17.064 #define SPDK_CONFIG_CROSS_PREFIX 00:03:17.064 #undef SPDK_CONFIG_CRYPTO 00:03:17.064 #undef SPDK_CONFIG_CRYPTO_MLX5 00:03:17.064 #undef SPDK_CONFIG_CUSTOMOCF 00:03:17.064 #undef SPDK_CONFIG_DAOS 00:03:17.064 #define SPDK_CONFIG_DAOS_DIR 00:03:17.064 #define SPDK_CONFIG_DEBUG 1 00:03:17.064 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:03:17.064 #define SPDK_CONFIG_DPDK_DIR /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:17.064 #define SPDK_CONFIG_DPDK_INC_DIR 00:03:17.064 #define SPDK_CONFIG_DPDK_LIB_DIR 00:03:17.064 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:03:17.064 #undef SPDK_CONFIG_DPDK_UADK 00:03:17.064 #define SPDK_CONFIG_ENV /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:17.064 #define SPDK_CONFIG_EXAMPLES 1 00:03:17.064 #undef SPDK_CONFIG_FC 00:03:17.064 #define SPDK_CONFIG_FC_PATH 00:03:17.064 #define SPDK_CONFIG_FIO_PLUGIN 1 00:03:17.064 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:03:17.064 #undef SPDK_CONFIG_FUSE 00:03:17.064 #undef SPDK_CONFIG_FUZZER 00:03:17.064 #define SPDK_CONFIG_FUZZER_LIB 00:03:17.064 #undef SPDK_CONFIG_GOLANG 00:03:17.064 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:03:17.064 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:03:17.064 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:03:17.064 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:03:17.064 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:03:17.064 #undef SPDK_CONFIG_HAVE_LIBBSD 00:03:17.064 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:03:17.064 #define SPDK_CONFIG_IDXD 1 00:03:17.064 #undef SPDK_CONFIG_IDXD_KERNEL 00:03:17.064 #undef SPDK_CONFIG_IPSEC_MB 00:03:17.064 #define SPDK_CONFIG_IPSEC_MB_DIR 00:03:17.064 #define SPDK_CONFIG_ISAL 1 00:03:17.064 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:03:17.065 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:03:17.065 #define SPDK_CONFIG_LIBDIR 00:03:17.065 #undef SPDK_CONFIG_LTO 00:03:17.065 #define SPDK_CONFIG_MAX_LCORES 00:03:17.065 #undef SPDK_CONFIG_NVME_CUSE 00:03:17.065 #undef SPDK_CONFIG_OCF 00:03:17.065 #define SPDK_CONFIG_OCF_PATH 00:03:17.065 #define SPDK_CONFIG_OPENSSL_PATH 00:03:17.065 #undef SPDK_CONFIG_PGO_CAPTURE 00:03:17.065 #define SPDK_CONFIG_PGO_DIR 00:03:17.065 #undef SPDK_CONFIG_PGO_USE 00:03:17.065 #define SPDK_CONFIG_PREFIX /usr/local 
00:03:17.065 #undef SPDK_CONFIG_RAID5F 00:03:17.065 #undef SPDK_CONFIG_RBD 00:03:17.065 #define SPDK_CONFIG_RDMA 1 00:03:17.065 #define SPDK_CONFIG_RDMA_PROV verbs 00:03:17.065 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:03:17.065 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:03:17.065 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:03:17.065 #undef SPDK_CONFIG_SHARED 00:03:17.065 #undef SPDK_CONFIG_SMA 00:03:17.065 #define SPDK_CONFIG_TESTS 1 00:03:17.065 #undef SPDK_CONFIG_TSAN 00:03:17.065 #undef SPDK_CONFIG_UBLK 00:03:17.065 #undef SPDK_CONFIG_UBSAN 00:03:17.065 #define SPDK_CONFIG_UNIT_TESTS 1 00:03:17.065 #undef SPDK_CONFIG_URING 00:03:17.065 #define SPDK_CONFIG_URING_PATH 00:03:17.065 #undef SPDK_CONFIG_URING_ZNS 00:03:17.065 #undef SPDK_CONFIG_USDT 00:03:17.065 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:03:17.065 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:03:17.065 #undef SPDK_CONFIG_VFIO_USER 00:03:17.065 #define SPDK_CONFIG_VFIO_USER_DIR 00:03:17.065 #undef SPDK_CONFIG_VHOST 00:03:17.065 #undef SPDK_CONFIG_VIRTIO 00:03:17.065 #undef SPDK_CONFIG_VTUNE 00:03:17.065 #define SPDK_CONFIG_VTUNE_DIR 00:03:17.065 #define SPDK_CONFIG_WERROR 1 00:03:17.065 #define SPDK_CONFIG_WPDK_DIR 00:03:17.065 #undef SPDK_CONFIG_XNVME 00:03:17.065 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:03:17.065 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:03:17.065 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:17.065 +++ [[ -e /bin/wpdk_common.sh ]] 00:03:17.065 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:17.065 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:17.065 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:17.065 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:17.065 ++++ export PATH 00:03:17.065 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:17.065 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:17.065 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:17.065 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:17.065 +++ _pmdir=/usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:17.065 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:03:17.065 +++ _pmrootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:17.065 +++ TEST_TAG=N/A 00:03:17.065 +++ TEST_TAG_FILE=/usr/home/vagrant/spdk_repo/spdk/.run_test_name 00:03:17.065 +++ PM_OUTPUTDIR=/usr/home/vagrant/spdk_repo/spdk/../output/power 00:03:17.065 ++++ uname -s 00:03:17.065 +++ PM_OS=FreeBSD 00:03:17.065 +++ MONITOR_RESOURCES_SUDO=() 00:03:17.065 +++ declare -A MONITOR_RESOURCES_SUDO 00:03:17.065 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:03:17.065 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:03:17.065 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:03:17.065 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:03:17.065 +++ SUDO[0]= 00:03:17.065 +++ SUDO[1]='sudo -E' 00:03:17.065 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:03:17.065 +++ [[ FreeBSD == FreeBSD ]] 00:03:17.065 +++ MONITOR_RESOURCES=(collect-vmstat) 00:03:17.065 +++ [[ ! 
-d /usr/home/vagrant/spdk_repo/spdk/../output/power ]] 00:03:17.065 ++ : 0 00:03:17.065 ++ export RUN_NIGHTLY 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_RUN_VALGRIND 00:03:17.065 ++ : 1 00:03:17.065 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:03:17.065 ++ : 1 00:03:17.065 ++ export SPDK_TEST_UNITTEST 00:03:17.065 ++ : 00:03:17.065 ++ export SPDK_TEST_AUTOBUILD 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_RELEASE_BUILD 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_ISAL 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_ISCSI 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_ISCSI_INITIATOR 00:03:17.065 ++ : 1 00:03:17.065 ++ export SPDK_TEST_NVME 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_NVME_PMR 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_NVME_BP 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_NVME_CLI 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_NVME_CUSE 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_NVME_FDP 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_NVMF 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_VFIOUSER 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_VFIOUSER_QEMU 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_FUZZER 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_FUZZER_SHORT 00:03:17.065 ++ : rdma 00:03:17.065 ++ export SPDK_TEST_NVMF_TRANSPORT 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_RBD 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_VHOST 00:03:17.065 ++ : 1 00:03:17.065 ++ export SPDK_TEST_BLOCKDEV 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_IOAT 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_BLOBFS 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_VHOST_INIT 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_LVOL 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_VBDEV_COMPRESS 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_RUN_ASAN 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_RUN_UBSAN 00:03:17.065 ++ : 00:03:17.065 ++ export SPDK_RUN_EXTERNAL_DPDK 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_RUN_NON_ROOT 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_CRYPTO 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_FTL 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_OCF 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_VMD 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_OPAL 00:03:17.065 ++ : 00:03:17.065 ++ export SPDK_TEST_NATIVE_DPDK 00:03:17.065 ++ : true 00:03:17.065 ++ export SPDK_AUTOTEST_X 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_RAID5 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_URING 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_USDT 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_USE_IGB_UIO 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_SCHEDULER 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_SCANBUILD 00:03:17.065 ++ : 00:03:17.065 ++ export SPDK_TEST_NVMF_NICS 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_SMA 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_DAOS 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_XNVME 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_ACCEL_DSA 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_ACCEL_IAA 00:03:17.065 ++ : 00:03:17.065 ++ export SPDK_TEST_FUZZER_TARGET 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_TEST_NVMF_MDNS 00:03:17.065 ++ : 0 00:03:17.065 ++ export SPDK_JSONRPC_GO_CLIENT 00:03:17.065 ++ export 
SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:03:17.065 ++ SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:03:17.065 ++ export DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:17.065 ++ DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:17.065 ++ export VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:17.065 ++ VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:17.065 ++ export LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:17.065 ++ LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:17.065 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:03:17.065 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:03:17.065 ++ export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:03:17.065 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:03:17.065 ++ export PYTHONDONTWRITEBYTECODE=1 00:03:17.065 ++ PYTHONDONTWRITEBYTECODE=1 00:03:17.065 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:17.065 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:17.065 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:17.065 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:17.065 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:03:17.065 ++ rm -rf /var/tmp/asan_suppression_file 00:03:17.065 ++ cat 00:03:17.065 ++ echo leak:libfuse3.so 00:03:17.065 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:17.065 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:17.065 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:17.065 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:17.065 ++ '[' -z /var/spdk/dependencies ']' 00:03:17.065 ++ export DEPENDENCY_DIR 00:03:17.065 ++ export SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:03:17.065 ++ SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:03:17.065 ++ export SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:03:17.065 ++ SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:03:17.065 ++ export QEMU_BIN= 00:03:17.065 ++ QEMU_BIN= 00:03:17.065 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:17.065 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:17.065 ++ export AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:17.065 ++ AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:17.065 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:17.065 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:17.065 ++ '[' 0 -eq 0 ']' 00:03:17.065 ++ 
export valgrind= 00:03:17.065 ++ valgrind= 00:03:17.065 +++ uname -s 00:03:17.065 ++ '[' FreeBSD = Linux ']' 00:03:17.065 +++ uname -s 00:03:17.065 ++ '[' FreeBSD = FreeBSD ']' 00:03:17.065 ++ MAKE=gmake 00:03:17.065 +++ sysctl -a 00:03:17.065 +++ grep -E -i hw.ncpu 00:03:17.065 +++ awk '{print $2}' 00:03:17.065 ++ MAKEFLAGS=-j10 00:03:17.065 ++ HUGEMEM=2048 00:03:17.065 ++ export HUGEMEM=2048 00:03:17.065 ++ HUGEMEM=2048 00:03:17.065 ++ NO_HUGE=() 00:03:17.065 ++ TEST_MODE= 00:03:17.065 ++ [[ -z '' ]] 00:03:17.065 ++ PYTHONPATH+=:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:17.065 ++ exec 00:03:17.065 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:17.065 ++ /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:03:17.065 ++ set_test_storage 2147483648 00:03:17.065 ++ [[ -v testdir ]] 00:03:17.065 ++ local requested_size=2147483648 00:03:17.065 ++ local mount target_dir 00:03:17.065 ++ local -A mounts fss sizes avails uses 00:03:17.065 ++ local source fs size avail mount use 00:03:17.065 ++ local storage_fallback storage_candidates 00:03:17.065 +++ mktemp -udt spdk.XXXXXX 00:03:17.065 ++ storage_fallback=/tmp/spdk.XXXXXX.Apbq5Rss 00:03:17.065 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:03:17.065 ++ [[ -n '' ]] 00:03:17.065 ++ [[ -n '' ]] 00:03:17.065 ++ mkdir -p /usr/home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.Apbq5Rss/tests/unit /tmp/spdk.XXXXXX.Apbq5Rss 00:03:17.065 ++ requested_size=2214592512 00:03:17.065 ++ read -r source fs size use avail _ mount 00:03:17.065 +++ df -T 00:03:17.065 +++ grep -v Filesystem 00:03:17.065 ++ mounts["$mount"]=/dev/gptid/bd0c1ea5-f644-11ee-93e1-001e672be6d6 00:03:17.065 ++ fss["$mount"]=ufs 00:03:17.065 ++ avails["$mount"]=17218994176 00:03:17.065 ++ sizes["$mount"]=31182712832 00:03:17.065 ++ uses["$mount"]=11469103104 00:03:17.065 ++ read -r source fs size use avail _ mount 00:03:17.065 ++ mounts["$mount"]=devfs 00:03:17.065 ++ fss["$mount"]=devfs 00:03:17.065 ++ avails["$mount"]=0 00:03:17.065 ++ sizes["$mount"]=1024 00:03:17.065 ++ uses["$mount"]=1024 00:03:17.065 ++ read -r source fs size use avail _ mount 00:03:17.065 ++ mounts["$mount"]=tmpfs 00:03:17.065 ++ fss["$mount"]=tmpfs 00:03:17.065 ++ avails["$mount"]=2147442688 00:03:17.065 ++ sizes["$mount"]=2147483648 00:03:17.065 ++ uses["$mount"]=40960 00:03:17.065 ++ read -r source fs size use avail _ mount 00:03:17.065 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest_2/freebsd13-libvirt/output 00:03:17.065 ++ fss["$mount"]=fusefs.sshfs 00:03:17.065 ++ avails["$mount"]=93576855552 00:03:17.065 ++ sizes["$mount"]=105088212992 00:03:17.065 ++ uses["$mount"]=6125924352 00:03:17.065 ++ read -r source fs size use avail _ mount 00:03:17.065 ++ printf '* Looking for test storage...\n' 00:03:17.065 * Looking for test storage... 
00:03:17.065 ++ local target_space new_size 00:03:17.065 ++ for target_dir in "${storage_candidates[@]}" 00:03:17.065 +++ df /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:17.065 +++ awk '$1 !~ /Filesystem/{print $6}' 00:03:17.065 ++ mount=/ 00:03:17.065 ++ target_space=17218994176 00:03:17.065 ++ (( target_space == 0 || target_space < requested_size )) 00:03:17.065 ++ (( target_space >= requested_size )) 00:03:17.065 ++ [[ ufs == tmpfs ]] 00:03:17.065 ++ [[ ufs == ramfs ]] 00:03:17.065 ++ [[ / == / ]] 00:03:17.065 ++ new_size=13683695616 00:03:17.065 ++ (( new_size * 100 / sizes[/] > 95 )) 00:03:17.065 ++ export SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:17.065 ++ SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:17.065 ++ printf '* Found test storage at %s\n' /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:17.065 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:17.065 ++ return 0 00:03:17.065 ++ set -o errtrace 00:03:17.065 ++ shopt -s extdebug 00:03:17.065 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:03:17.065 ++ PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@1686 -- # true 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@1688 -- # xtrace_fd 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@29 -- # exec 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@18 -- # set -x 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@17 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /usr/home/vagrant/spdk_repo/spdk/mk/cc.mk 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=clang 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@181 -- # hash lcov 00:03:17.065 /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 181: hash: lcov: not found 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@184 -- # cov_avail=no 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@186 -- # '[' no = yes ']' 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@208 -- # uname -m 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@208 -- # '[' amd64 = aarch64 ']' 00:03:17.065 10:07:22 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:17.065 10:07:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:17.065 ************************************ 00:03:17.065 START TEST unittest_pci_event 00:03:17.065 ************************************ 00:03:17.065 10:07:22 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:17.065 00:03:17.065 00:03:17.065 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.065 http://cunit.sourceforge.net/ 00:03:17.065 00:03:17.065 00:03:17.065 Suite: pci_event 00:03:17.065 Test: test_pci_parse_event ...passed 00:03:17.065 00:03:17.065 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.065 suites 1 1 n/a 0 0 00:03:17.065 tests 1 1 1 0 0 00:03:17.065 asserts 1 1 1 0 n/a 00:03:17.065 00:03:17.065 Elapsed time = 0.000 seconds 00:03:17.065 00:03:17.065 real 0m0.024s 00:03:17.065 user 0m0.000s 00:03:17.065 sys 0m0.012s 00:03:17.065 10:07:22 unittest.unittest_pci_event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:17.065 10:07:22 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:03:17.065 ************************************ 00:03:17.065 END TEST unittest_pci_event 00:03:17.065 ************************************ 00:03:17.066 10:07:22 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:17.066 10:07:22 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:17.066 10:07:22 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:17.066 10:07:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:17.326 ************************************ 00:03:17.326 START TEST unittest_include 00:03:17.326 ************************************ 00:03:17.326 10:07:22 unittest.unittest_include -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:17.326 00:03:17.326 00:03:17.326 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.326 http://cunit.sourceforge.net/ 00:03:17.326 00:03:17.326 00:03:17.326 Suite: histogram 00:03:17.326 Test: histogram_test ...passed 00:03:17.326 Test: histogram_merge ...passed 00:03:17.326 00:03:17.326 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.326 suites 1 1 n/a 0 0 00:03:17.326 tests 2 2 2 0 0 00:03:17.326 asserts 50 50 50 0 n/a 00:03:17.326 00:03:17.326 Elapsed time = 0.000 seconds 00:03:17.326 00:03:17.326 real 0m0.008s 00:03:17.326 user 0m0.000s 00:03:17.326 sys 0m0.007s 00:03:17.326 10:07:22 unittest.unittest_include -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:17.326 ************************************ 00:03:17.326 END TEST unittest_include 00:03:17.326 ************************************ 00:03:17.326 10:07:22 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:03:17.326 10:07:22 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:03:17.326 10:07:22 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:17.326 10:07:22 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:17.326 10:07:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:17.326 ************************************ 00:03:17.326 START TEST unittest_bdev 00:03:17.326 ************************************ 00:03:17.326 10:07:22 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # unittest_bdev 00:03:17.326 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:03:17.326 00:03:17.326 00:03:17.326 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.326 http://cunit.sourceforge.net/ 00:03:17.326 00:03:17.326 00:03:17.326 Suite: 
bdev 00:03:17.326 Test: bytes_to_blocks_test ...passed 00:03:17.326 Test: num_blocks_test ...passed 00:03:17.326 Test: io_valid_test ...passed 00:03:17.327 Test: open_write_test ...[2024-06-10 10:07:22.730130] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.730369] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.730385] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:03:17.327 passed 00:03:17.327 Test: claim_test ...passed 00:03:17.327 Test: alias_add_del_test ...[2024-06-10 10:07:22.732745] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:03:17.327 [2024-06-10 10:07:22.732773] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4610:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:03:17.327 [2024-06-10 10:07:22.732784] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:03:17.327 passed 00:03:17.327 Test: get_device_stat_test ...passed 00:03:17.327 Test: bdev_io_types_test ...passed 00:03:17.327 Test: bdev_io_wait_test ...passed 00:03:17.327 Test: bdev_io_spans_split_test ...passed 00:03:17.327 Test: bdev_io_boundary_split_test ...passed 00:03:17.327 Test: bdev_io_max_size_and_segment_split_test ...[2024-06-10 10:07:22.740164] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:03:17.327 passed 00:03:17.327 Test: bdev_io_mix_split_test ...passed 00:03:17.327 Test: bdev_io_split_with_io_wait ...passed 00:03:17.327 Test: bdev_io_write_unit_split_test ...[2024-06-10 10:07:22.745324] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:17.327 [2024-06-10 10:07:22.745388] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:17.327 [2024-06-10 10:07:22.745409] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:03:17.327 [2024-06-10 10:07:22.745431] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:03:17.327 passed 00:03:17.327 Test: bdev_io_alignment_with_boundary ...passed 00:03:17.327 Test: bdev_io_alignment ...passed 00:03:17.327 Test: bdev_histograms ...passed 00:03:17.327 Test: bdev_write_zeroes ...passed 00:03:17.327 Test: bdev_compare_and_write ...passed 00:03:17.327 Test: bdev_compare ...passed 00:03:17.327 Test: bdev_compare_emulated ...passed 00:03:17.327 Test: bdev_zcopy_write ...passed 00:03:17.327 Test: bdev_zcopy_read ...passed 00:03:17.327 Test: bdev_open_while_hotremove ...passed 00:03:17.327 Test: bdev_close_while_hotremove ...passed 00:03:17.327 Test: bdev_open_ext_test ...passed 00:03:17.327 Test: bdev_open_ext_unregister ...[2024-06-10 10:07:22.765935] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8141:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:17.327 [2024-06-10 10:07:22.765986] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8141:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:17.327 passed 00:03:17.327 Test: bdev_set_io_timeout ...passed 00:03:17.327 Test: bdev_set_qd_sampling ...passed 00:03:17.327 Test: lba_range_overlap ...passed 00:03:17.327 Test: lock_lba_range_check_ranges ...passed 00:03:17.327 Test: lock_lba_range_with_io_outstanding ...passed 00:03:17.327 Test: lock_lba_range_overlapped ...passed 00:03:17.327 Test: bdev_quiesce ...[2024-06-10 10:07:22.777714] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10064:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:03:17.327 passed 00:03:17.327 Test: bdev_io_abort ...passed 00:03:17.327 Test: bdev_unmap ...passed 00:03:17.327 Test: bdev_write_zeroes_split_test ...passed 00:03:17.327 Test: bdev_set_options_test ...passed 00:03:17.327 Test: bdev_get_memory_domains ...passed 00:03:17.327 Test: bdev_io_ext ...[2024-06-10 10:07:22.783299] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:03:17.327 passed 00:03:17.327 Test: bdev_io_ext_no_opts ...passed 00:03:17.327 Test: bdev_io_ext_invalid_opts ...passed 00:03:17.327 Test: bdev_io_ext_split ...passed 00:03:17.327 Test: bdev_io_ext_bounce_buffer ...passed 00:03:17.327 Test: bdev_register_uuid_alias ...[2024-06-10 10:07:22.793781] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 3ae747c5-2711-11ef-b084-113036b5c18d already exists 00:03:17.327 [2024-06-10 10:07:22.793820] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:3ae747c5-2711-11ef-b084-113036b5c18d alias for bdev bdev0 00:03:17.327 passed 00:03:17.327 Test: bdev_unregister_by_name ...passed 00:03:17.327 Test: for_each_bdev_test ...passed 00:03:17.327 Test: bdev_seek_test ...[2024-06-10 10:07:22.794169] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7931:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:03:17.327 [2024-06-10 10:07:22.794182] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:03:17.327 passed 00:03:17.327 Test: bdev_copy ...passed 00:03:17.327 Test: bdev_copy_split_test ...passed 00:03:17.327 Test: examine_locks ...passed 00:03:17.327 Test: claim_v2_rwo ...passed 00:03:17.327 Test: claim_v2_rom ...[2024-06-10 10:07:22.800467] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800507] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8665:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800527] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800547] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800566] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800594] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8661:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:03:17.327 passed 00:03:17.327 Test: claim_v2_rwm ...[2024-06-10 10:07:22.800667] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800688] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800707] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800725] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800747] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:03:17.327 [2024-06-10 10:07:22.800766] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8699:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:17.327 [2024-06-10 10:07:22.800806] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8734:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:17.327 [2024-06-10 10:07:22.800832] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800852] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800870] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:17.327 passed 00:03:17.327 Test: claim_v2_existing_writer ...passed 00:03:17.327 Test: claim_v2_existing_v1 
...passed 00:03:17.327 Test: claim_v1_existing_v2 ...[2024-06-10 10:07:22.800889] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800908] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8753:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.800930] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8734:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:17.327 [2024-06-10 10:07:22.800972] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8699:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:17.327 [2024-06-10 10:07:22.800991] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8699:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:17.327 [2024-06-10 10:07:22.801031] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.801050] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.801068] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:17.327 passed 00:03:17.327 Test: examine_claimed ...passed 00:03:17.327 00:03:17.327 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.327 suites 1 1 n/a 0 0 00:03:17.327 tests 59 59 59 0 0 00:03:17.327 asserts 4599 4599 4599 0 n/a 00:03:17.327 00:03:17.327 Elapsed time = 0.070 seconds 00:03:17.327 [2024-06-10 10:07:22.801108] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.801129] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.801150] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:17.327 [2024-06-10 10:07:22.801222] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:03:17.327 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:03:17.327 00:03:17.327 00:03:17.327 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.327 http://cunit.sourceforge.net/ 00:03:17.327 00:03:17.327 00:03:17.327 Suite: nvme 00:03:17.327 Test: test_create_ctrlr ...passed 00:03:17.327 Test: test_reset_ctrlr ...[2024-06-10 10:07:22.810921] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:17.327 passed 00:03:17.327 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:03:17.327 Test: test_failover_ctrlr ...passed 00:03:17.327 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:03:17.328 Test: test_pending_reset ...[2024-06-10 10:07:22.811327] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.811355] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.811373] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.811479] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.811500] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 passed 00:03:17.328 Test: test_attach_ctrlr ...passed 00:03:17.328 Test: test_aer_cb ...passed 00:03:17.328 Test: test_submit_nvme_cmd ...[2024-06-10 10:07:22.811558] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:03:17.328 passed 00:03:17.328 Test: test_add_remove_trid ...passed 00:03:17.328 Test: test_abort ...passed 00:03:17.328 Test: test_get_io_qpair ...passed 00:03:17.328 Test: test_bdev_unregister ...[2024-06-10 10:07:22.811892] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7447:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:03:17.328 passed 00:03:17.328 Test: test_compare_ns ...passed 00:03:17.328 Test: test_init_ana_log_page ...passed 00:03:17.328 Test: test_get_memory_domains ...passed 00:03:17.328 Test: test_reconnect_qpair ...passed 00:03:17.328 Test: test_create_bdev_ctrlr ...[2024-06-10 10:07:22.812201] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.812265] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5373:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:03:17.328 passed 00:03:17.328 Test: test_add_multi_ns_to_bdev ...[2024-06-10 10:07:22.812419] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4564:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:03:17.328 passed 00:03:17.328 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:03:17.328 Test: test_admin_path ...passed 00:03:17.328 Test: test_reset_bdev_ctrlr ...passed 00:03:17.328 Test: test_find_io_path ...passed 00:03:17.328 Test: test_retry_io_if_ana_state_is_updating ...passed 00:03:17.328 Test: test_retry_io_for_io_path_error ...passed 00:03:17.328 Test: test_retry_io_count ...passed 00:03:17.328 Test: test_concurrent_read_ana_log_page ...passed 00:03:17.328 Test: test_retry_io_for_ana_error ...passed 00:03:17.328 Test: test_check_io_error_resiliency_params ...[2024-06-10 10:07:22.812951] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6067:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:03:17.328 [2024-06-10 10:07:22.812965] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6071:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:17.328 [2024-06-10 10:07:22.812974] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:17.328 [2024-06-10 10:07:22.812982] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6083:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:03:17.328 [2024-06-10 10:07:22.812991] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6095:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:17.328 [2024-06-10 10:07:22.813003] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6095:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:17.328 [2024-06-10 10:07:22.813012] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6075:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:03:17.328 passed 00:03:17.328 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:03:17.328 Test: test_reconnect_ctrlr ...[2024-06-10 10:07:22.813020] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6090:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:03:17.328 [2024-06-10 10:07:22.813028] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6087:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:03:17.328 [2024-06-10 10:07:22.813111] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.813130] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.813161] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.813175] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 passed 00:03:17.328 Test: test_retry_failover_ctrlr ...passed 00:03:17.328 Test: test_fail_path ...passed 00:03:17.328 Test: test_nvme_ns_cmp ...passed 00:03:17.328 Test: test_ana_transition ...passed 00:03:17.328 Test: test_set_preferred_path ...[2024-06-10 10:07:22.813189] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.813229] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.813275] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:17.328 [2024-06-10 10:07:22.813291] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.813306] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.813319] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 [2024-06-10 10:07:22.813332] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 passed 00:03:17.328 Test: test_find_next_io_path ...passed 00:03:17.328 Test: test_find_io_path_min_qd ...passed 00:03:17.328 Test: test_disable_auto_failback ...[2024-06-10 10:07:22.813462] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 passed 00:03:17.328 Test: test_set_multipath_policy ...passed 00:03:17.328 Test: test_uuid_generation ...passed 00:03:17.328 Test: test_retry_io_to_same_path ...passed 00:03:17.328 Test: test_race_between_reset_and_disconnected ...passed 00:03:17.328 Test: test_ctrlr_op_rpc ...passed 00:03:17.328 Test: test_bdev_ctrlr_op_rpc ...passed 00:03:17.328 Test: test_disable_enable_ctrlr ...[2024-06-10 10:07:22.843802] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:17.328 passed 00:03:17.328 Test: test_delete_ctrlr_done ...[2024-06-10 10:07:22.844089] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:17.328 passed 00:03:17.328 Test: test_ns_remove_during_reset ...passed 00:03:17.328 00:03:17.328 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.328 suites 1 1 n/a 0 0 00:03:17.328 tests 48 48 48 0 0 00:03:17.328 asserts 3565 3565 3565 0 n/a 00:03:17.328 00:03:17.328 Elapsed time = 0.016 seconds 00:03:17.328 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:03:17.328 00:03:17.328 00:03:17.328 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.328 http://cunit.sourceforge.net/ 00:03:17.328 00:03:17.328 Test Options 00:03:17.328 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:03:17.328 00:03:17.328 Suite: raid 00:03:17.328 Test: test_create_raid ...passed 00:03:17.328 Test: test_create_raid_superblock ...passed 00:03:17.328 Test: test_delete_raid ...passed 00:03:17.328 Test: test_create_raid_invalid_args ...[2024-06-10 10:07:22.855483] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:03:17.328 [2024-06-10 10:07:22.856209] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:03:17.328 [2024-06-10 10:07:22.856294] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:03:17.328 [2024-06-10 10:07:22.856322] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:17.328 [2024-06-10 10:07:22.856333] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:17.328 [2024-06-10 10:07:22.856734] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:17.328 [2024-06-10 10:07:22.856769] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:17.328 passed 00:03:17.328 Test: test_delete_raid_invalid_args ...passed 00:03:17.328 Test: test_io_channel ...passed 00:03:17.328 Test: test_reset_io ...passed 00:03:17.328 Test: test_multi_raid ...passed 00:03:17.328 Test: test_io_type_supported ...passed 00:03:17.328 Test: test_raid_json_dump_info ...passed 00:03:17.328 Test: test_context_size ...passed 00:03:17.328 Test: test_raid_level_conversions ...passed 00:03:17.328 Test: test_raid_io_split ...passed 00:03:17.328 Test: test_raid_process ...passed 00:03:17.328 00:03:17.328 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.328 suites 1 1 n/a 0 0 00:03:17.328 tests 14 14 14 0 0 00:03:17.328 asserts 6183 6183 6183 0 n/a 00:03:17.328 00:03:17.328 Elapsed time = 0.000 seconds 00:03:17.328 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:03:17.328 00:03:17.328 00:03:17.328 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.328 http://cunit.sourceforge.net/ 00:03:17.328 00:03:17.328 00:03:17.328 Suite: raid_sb 00:03:17.328 Test: test_raid_bdev_write_superblock ...passed 00:03:17.328 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:17.328 Test: test_raid_bdev_parse_superblock ...passed 
00:03:17.328 Suite: raid_sb_md 00:03:17.328 Test: test_raid_bdev_write_superblock ...passed 00:03:17.329 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:17.329 Test: test_raid_bdev_parse_superblock ...passed[2024-06-10 10:07:22.862724] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:17.329 [2024-06-10 10:07:22.862898] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:17.329 00:03:17.329 Suite: raid_sb_md_interleaved 00:03:17.329 Test: test_raid_bdev_write_superblock ...passed 00:03:17.329 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:17.329 Test: test_raid_bdev_parse_superblock ...passed 00:03:17.329 00:03:17.329 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.329 suites 3 3 n/a 0 0 00:03:17.329 tests 9 9 9 0 0 00:03:17.329 asserts 139 139 139 0 n/a 00:03:17.329 00:03:17.329 Elapsed time = 0.000 seconds 00:03:17.329 [2024-06-10 10:07:22.862976] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:17.329 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:03:17.329 00:03:17.329 00:03:17.329 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.329 http://cunit.sourceforge.net/ 00:03:17.329 00:03:17.329 00:03:17.329 Suite: concat 00:03:17.329 Test: test_concat_start ...passed 00:03:17.329 Test: test_concat_rw ...passed 00:03:17.329 Test: test_concat_null_payload ...passed 00:03:17.329 00:03:17.329 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.329 suites 1 1 n/a 0 0 00:03:17.329 tests 3 3 3 0 0 00:03:17.329 asserts 8460 8460 8460 0 n/a 00:03:17.329 00:03:17.329 Elapsed time = 0.000 seconds 00:03:17.329 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:03:17.329 00:03:17.329 00:03:17.329 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.329 http://cunit.sourceforge.net/ 00:03:17.329 00:03:17.329 00:03:17.329 Suite: raid0 00:03:17.329 Test: test_write_io ...passed 00:03:17.329 Test: test_read_io ...passed 00:03:17.329 Test: test_unmap_io ...passed 00:03:17.329 Test: test_io_failure ...passed 00:03:17.329 Suite: raid0_dif 00:03:17.329 Test: test_write_io ...passed 00:03:17.329 Test: test_read_io ...passed 00:03:17.329 Test: test_unmap_io ...passed 00:03:17.329 Test: test_io_failure ...passed 00:03:17.329 00:03:17.329 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.329 suites 2 2 n/a 0 0 00:03:17.329 tests 8 8 8 0 0 00:03:17.329 asserts 368291 368291 368291 0 n/a 00:03:17.329 00:03:17.329 Elapsed time = 0.008 seconds 00:03:17.329 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:03:17.329 00:03:17.329 00:03:17.329 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.329 http://cunit.sourceforge.net/ 00:03:17.329 00:03:17.329 00:03:17.329 Suite: raid1 00:03:17.329 Test: test_raid1_start ...passed 00:03:17.329 Test: test_raid1_read_balancing ...passed 00:03:17.329 Test: test_raid1_write_error ...passed 00:03:17.329 Test: test_raid1_read_error ...passed 00:03:17.329 
00:03:17.329 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.329 suites 1 1 n/a 0 0 00:03:17.329 tests 4 4 4 0 0 00:03:17.329 asserts 4374 4374 4374 0 n/a 00:03:17.329 00:03:17.329 Elapsed time = 0.000 seconds 00:03:17.329 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:03:17.329 00:03:17.329 00:03:17.329 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.329 http://cunit.sourceforge.net/ 00:03:17.329 00:03:17.329 00:03:17.329 Suite: zone 00:03:17.329 Test: test_zone_get_operation ...passed 00:03:17.329 Test: test_bdev_zone_get_info ...passed 00:03:17.329 Test: test_bdev_zone_management ...passed 00:03:17.329 Test: test_bdev_zone_append ...passed 00:03:17.329 Test: test_bdev_zone_append_with_md ...passed 00:03:17.329 Test: test_bdev_zone_appendv ...passed 00:03:17.329 Test: test_bdev_zone_appendv_with_md ...passed 00:03:17.329 Test: test_bdev_io_get_append_location ...passed 00:03:17.329 00:03:17.329 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.329 suites 1 1 n/a 0 0 00:03:17.329 tests 8 8 8 0 0 00:03:17.329 asserts 94 94 94 0 n/a 00:03:17.329 00:03:17.329 Elapsed time = 0.000 seconds 00:03:17.329 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:03:17.329 00:03:17.329 00:03:17.329 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.329 http://cunit.sourceforge.net/ 00:03:17.329 00:03:17.329 00:03:17.329 Suite: gpt_parse 00:03:17.329 Test: test_parse_mbr_and_primary ...[2024-06-10 10:07:22.897085] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:17.329 [2024-06-10 10:07:22.897261] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:17.329 [2024-06-10 10:07:22.897286] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:17.329 [2024-06-10 10:07:22.897296] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:17.329 [2024-06-10 10:07:22.897308] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:17.329 [2024-06-10 10:07:22.897318] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:17.329 passed 00:03:17.329 Test: test_parse_secondary ...passed 00:03:17.329 Test: test_check_mbr ...passed 00:03:17.329 Test: test_read_header ...[2024-06-10 10:07:22.897449] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:17.329 [2024-06-10 10:07:22.897458] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:17.329 [2024-06-10 10:07:22.897469] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:17.329 [2024-06-10 10:07:22.897478] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:17.329 [2024-06-10 10:07:22.897605] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt 
and the related buffer should not be NULL 00:03:17.329 [2024-06-10 10:07:22.897615] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:17.329 [2024-06-10 10:07:22.897629] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:03:17.329 [2024-06-10 10:07:22.897640] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:03:17.329 [2024-06-10 10:07:22.897658] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:03:17.329 passed 00:03:17.329 Test: test_read_partitions ...passed 00:03:17.329 00:03:17.329 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.329 suites 1 1 n/a 0 0 00:03:17.329 tests 5 5 5 0 0 00:03:17.329 asserts 33 33 33 0 n/a 00:03:17.329 00:03:17.329 Elapsed time = 0.000 seconds 00:03:17.329 [2024-06-10 10:07:22.897668] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:03:17.329 [2024-06-10 10:07:22.897680] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:03:17.329 [2024-06-10 10:07:22.897689] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:03:17.329 [2024-06-10 10:07:22.897703] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:03:17.329 [2024-06-10 10:07:22.897713] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:03:17.329 [2024-06-10 10:07:22.897722] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:03:17.329 [2024-06-10 10:07:22.897731] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:03:17.329 [2024-06-10 10:07:22.897798] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:03:17.329 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:03:17.329 00:03:17.329 00:03:17.329 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.329 http://cunit.sourceforge.net/ 00:03:17.329 00:03:17.329 00:03:17.329 Suite: bdev_part 00:03:17.329 Test: part_test ...[2024-06-10 10:07:22.905993] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:03:17.329 passed 00:03:17.329 Test: part_free_test ...passed 00:03:17.329 Test: part_get_io_channel_test ...passed 00:03:17.329 Test: part_construct_ext ...passed 00:03:17.329 00:03:17.329 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.329 suites 1 1 n/a 0 0 00:03:17.329 tests 4 4 4 0 0 00:03:17.329 asserts 48 48 48 0 n/a 00:03:17.329 00:03:17.329 Elapsed time = 0.008 seconds 00:03:17.329 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:03:17.329 00:03:17.329 00:03:17.329 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.329 
http://cunit.sourceforge.net/ 00:03:17.329 00:03:17.329 00:03:17.329 Suite: scsi_nvme_suite 00:03:17.329 Test: scsi_nvme_translate_test ...passed 00:03:17.329 00:03:17.330 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.330 suites 1 1 n/a 0 0 00:03:17.330 tests 1 1 1 0 0 00:03:17.330 asserts 104 104 104 0 n/a 00:03:17.330 00:03:17.330 Elapsed time = 0.000 seconds 00:03:17.330 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:03:17.330 00:03:17.330 00:03:17.330 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.330 http://cunit.sourceforge.net/ 00:03:17.330 00:03:17.330 00:03:17.330 Suite: lvol 00:03:17.330 Test: ut_lvs_init ...passed 00:03:17.330 Test: ut_lvol_init ...passed 00:03:17.330 Test: ut_lvol_snapshot ...passed 00:03:17.330 Test: ut_lvol_clone ...passed 00:03:17.330 Test: ut_lvs_destroy ...passed 00:03:17.330 Test: ut_lvs_unload ...passed 00:03:17.330 Test: ut_lvol_resize ...[2024-06-10 10:07:22.918785] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:03:17.330 [2024-06-10 10:07:22.918980] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:03:17.330 [2024-06-10 10:07:22.919069] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:03:17.330 passed 00:03:17.330 Test: ut_lvol_set_read_only ...passed 00:03:17.330 Test: ut_lvol_hotremove ...passed 00:03:17.330 Test: ut_vbdev_lvol_get_io_channel ...passed 00:03:17.330 Test: ut_vbdev_lvol_io_type_supported ...passed 00:03:17.330 Test: ut_lvol_read_write ...passed 00:03:17.330 Test: ut_vbdev_lvol_submit_request ...passed 00:03:17.330 Test: ut_lvol_examine_config ...passed 00:03:17.330 Test: ut_lvol_examine_disk ...passed 00:03:17.330 Test: ut_lvol_rename ...passed 00:03:17.330 Test: ut_bdev_finish ...passed 00:03:17.330 Test: ut_lvs_rename ...passed 00:03:17.330 Test: ut_lvol_seek ...passed 00:03:17.330 Test: ut_esnap_dev_create ...[2024-06-10 10:07:22.919143] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:03:17.330 [2024-06-10 10:07:22.919178] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:03:17.330 [2024-06-10 10:07:22.919187] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:03:17.330 [2024-06-10 10:07:22.919222] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:03:17.330 [2024-06-10 10:07:22.919231] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:03:17.330 [2024-06-10 10:07:22.919240] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:03:17.330 [2024-06-10 10:07:22.919261] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1912:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:03:17.330 passed 00:03:17.330 Test: ut_lvol_esnap_clone_bad_args ...passed 00:03:17.330 
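The ut_esnap_dev_create records above reject a NULL esnap ID, then a wrong length, then a string that is not a UUID, before the final "unable to claim esnap bdev" case. A minimal sketch of that staged validation, with a hypothetical validate_esnap_id() rather than the real vbdev_lvol code:

#include <CUnit/CUnit.h>
#include <ctype.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical validator mirroring the NULL / length / not-a-UUID rejection order. */
static int validate_esnap_id(const char *id)
{
    size_t i;

    if (id == NULL) {
        return -EINVAL;                      /* NULL esnap ID */
    }
    if (strlen(id) != 36) {
        return -EINVAL;                      /* invalid esnap ID length */
    }
    for (i = 0; i < 36; i++) {
        if (i == 8 || i == 13 || i == 18 || i == 23) {
            if (id[i] != '-') {
                return -EINVAL;              /* not a UUID */
            }
        } else if (!isxdigit((unsigned char)id[i])) {
            return -EINVAL;                  /* not a UUID */
        }
    }
    return 0;
}

static void test_esnap_id_validation(void)
{
    CU_ASSERT_EQUAL(validate_esnap_id(NULL), -EINVAL);
    CU_ASSERT_EQUAL(validate_esnap_id("too-short"), -EINVAL);
    CU_ASSERT_EQUAL(validate_esnap_id("a27fd8fe-d4b9-431e-a044-271016228ce4"), 0);
}

The accepted value reuses the UUID that appears in the log; any deviation in length, separator position, or hex digits is rejected before the claim step, matching the order of the error messages above.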
Test: ut_lvol_shallow_copy ...passed 00:03:17.330 Test: ut_lvol_set_external_parent ...passed 00:03:17.330 00:03:17.330 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.330 suites 1 1 n/a 0 0 00:03:17.330 tests 23 23 23 0 0 00:03:17.330 asserts 798 798 798 0 n/a 00:03:17.330 00:03:17.330 Elapsed time = 0.000 seconds 00:03:17.330 [2024-06-10 10:07:22.919281] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:03:17.330 [2024-06-10 10:07:22.919290] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:03:17.330 [2024-06-10 10:07:22.919311] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:17.330 [2024-06-10 10:07:22.919319] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:03:17.330 [2024-06-10 10:07:22.919333] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:03:17.330 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:03:17.590 00:03:17.590 00:03:17.590 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.590 http://cunit.sourceforge.net/ 00:03:17.590 00:03:17.590 00:03:17.590 Suite: zone_block 00:03:17.590 Test: test_zone_block_create ...passed 00:03:17.590 Test: test_zone_block_create_invalid ...[2024-06-10 10:07:22.930742] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:03:17.590 passed 00:03:17.590 Test: test_get_zone_info ...[2024-06-10 10:07:22.930926] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-10 10:07:22.930946] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:03:17.590 [2024-06-10 10:07:22.930958] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-10 10:07:22.930971] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:03:17.590 [2024-06-10 10:07:22.930981] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-06-10 10:07:22.930991] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:03:17.590 [2024-06-10 10:07:22.931001] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:03:17.590 Test: test_supported_io_types ...passed 00:03:17.590 Test: test_reset_zone ...passed 00:03:17.590 Test: test_open_zone ...[2024-06-10 10:07:22.931065] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.931086] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.931098] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.931154] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.931167] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.931200] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 passed 00:03:17.590 Test: test_zone_write ...[2024-06-10 10:07:22.931421] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.931443] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.931482] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:17.590 [2024-06-10 10:07:22.931493] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.931506] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:17.590 [2024-06-10 10:07:22.931515] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.932055] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:03:17.590 [2024-06-10 10:07:22.932074] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.932087] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:03:17.590 [2024-06-10 10:07:22.932097] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 passed 00:03:17.590 Test: test_zone_read ...[2024-06-10 10:07:22.932693] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:17.590 [2024-06-10 10:07:22.932710] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:17.590 [2024-06-10 10:07:22.932745] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:03:17.590 [2024-06-10 10:07:22.932755] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 passed 00:03:17.590 Test: test_close_zone ...[2024-06-10 10:07:22.932768] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:03:17.590 [2024-06-10 10:07:22.932778] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.932827] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:03:17.590 [2024-06-10 10:07:22.932837] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.932865] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.932881] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.932921] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 [2024-06-10 10:07:22.932933] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.590 passed 00:03:17.590 Test: test_finish_zone ...passed 00:03:17.590 Test: test_append_zone ...[2024-06-10 10:07:22.932991] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.591 [2024-06-10 10:07:22.933004] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.591 [2024-06-10 10:07:22.933034] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:17.591 [2024-06-10 10:07:22.933044] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.591 [2024-06-10 10:07:22.933057] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:17.591 [2024-06-10 10:07:22.933067] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
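The zone_block_write / zone_block_read errors in this suite encode the usual zoned-storage rules: an I/O must land in an existing zone, a write must start exactly at the zone's current write pointer, and neither may run past the zone's capacity. A minimal sketch of those checks, with a hypothetical struct zone rather than the vbdev's internal bookkeeping:

#include <errno.h>
#include <stdint.h>

/* Hypothetical zone bookkeeping; field names are illustrative only. */
struct zone {
    uint64_t start_lba;   /* first LBA of the zone */
    uint64_t capacity;    /* writable LBAs in the zone */
    uint64_t write_ptr;   /* next LBA that may be written */
};

static int zone_check_write(const struct zone *z, uint64_t lba, uint64_t len)
{
    if (z == NULL) {
        return -EINVAL;                              /* "invalid zone (lba ...)" */
    }
    if (lba != z->write_ptr) {
        return -EINVAL;                              /* "invalid address (lba ..., wp ...)" */
    }
    if (lba + len > z->start_lba + z->capacity) {
        return -EINVAL;                              /* "Write exceeds zone capacity" */
    }
    return 0;
}

With a zone whose writable range ends at 0x400, zone_check_write(z, 0x3f0, 0x20) fails the same way as the "Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)" records, and lba 0x407 against wp 0x405 trips the invalid-address branch.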
00:03:17.591 passed 00:03:17.591 00:03:17.591 [2024-06-10 10:07:22.934203] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:17.591 [2024-06-10 10:07:22.934221] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:17.591 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.591 suites 1 1 n/a 0 0 00:03:17.591 tests 11 11 11 0 0 00:03:17.591 asserts 3437 3437 3437 0 n/a 00:03:17.591 00:03:17.591 Elapsed time = 0.008 seconds 00:03:17.591 10:07:22 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:03:17.591 00:03:17.591 00:03:17.591 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.591 http://cunit.sourceforge.net/ 00:03:17.591 00:03:17.591 00:03:17.591 Suite: bdev 00:03:17.591 Test: basic ...[2024-06-10 10:07:22.944039] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24af29): Operation not permitted (rc=-1) 00:03:17.591 [2024-06-10 10:07:22.944279] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x82d26b480 (0x24af20): Operation not permitted (rc=-1) 00:03:17.591 [2024-06-10 10:07:22.944297] thread.c:2370:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24af29): Operation not permitted (rc=-1) 00:03:17.591 passed 00:03:17.591 Test: unregister_and_close ...passed 00:03:17.591 Test: unregister_and_close_different_threads ...passed 00:03:17.591 Test: basic_qos ...passed 00:03:17.591 Test: put_channel_during_reset ...passed 00:03:17.591 Test: aborted_reset ...passed 00:03:17.591 Test: aborted_reset_no_outstanding_io ...passed 00:03:17.591 Test: io_during_reset ...passed 00:03:17.591 Test: reset_completions ...passed 00:03:17.591 Test: io_during_qos_queue ...passed 00:03:17.591 Test: io_during_qos_reset ...passed 00:03:17.591 Test: enomem ...passed 00:03:17.591 Test: enomem_multi_bdev ...passed 00:03:17.591 Test: enomem_multi_bdev_unregister ...passed 00:03:17.591 Test: enomem_multi_io_target ...passed 00:03:17.591 Test: qos_dynamic_enable ...passed 00:03:17.591 Test: bdev_histograms_mt ...passed 00:03:17.591 Test: bdev_set_io_timeout_mt ...passed 00:03:17.591 Test: lock_lba_range_then_submit_io ...[2024-06-10 10:07:22.978749] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x82d26b600 not unregistered 00:03:17.591 [2024-06-10 10:07:22.979874] thread.c:2174:spdk_io_device_register: *ERROR*: io_device 0x24af08 already registered (old:0x82d26b600 new:0x82d26b780) 00:03:17.591 passed 00:03:17.591 Test: unregister_during_reset ...passed 00:03:17.591 Test: event_notify_and_close ...passed 00:03:17.591 Test: unregister_and_qos_poller ...passed 00:03:17.591 Suite: bdev_wrong_thread 00:03:17.591 Test: spdk_bdev_register_wt ...passed 00:03:17.591 Test: spdk_bdev_examine_wt ...passed 00:03:17.591 00:03:17.591 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.591 suites 2 2 n/a 0 0 00:03:17.591 tests 24 24 24 0 0 00:03:17.591 asserts 621 621 621 0 n/a 00:03:17.591 00:03:17.591 Elapsed time = 0.047 seconds 00:03:17.591 [2024-06-10 10:07:22.985934] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8460:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x82d234380 (0x82d234380) 00:03:17.591 [2024-06-10 10:07:22.985982] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x82d234380 (0x82d234380) 00:03:17.591 00:03:17.591 real 0m0.269s 00:03:17.591 user 0m0.165s 00:03:17.591 sys 0m0.083s 00:03:17.591 10:07:22 unittest.unittest_bdev -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:17.591 10:07:22 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:03:17.591 ************************************ 00:03:17.591 END TEST unittest_bdev 00:03:17.591 ************************************ 00:03:17.591 10:07:23 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:17.591 10:07:23 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:17.591 10:07:23 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:17.591 10:07:23 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:17.591 10:07:23 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:03:17.591 10:07:23 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:17.591 10:07:23 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:17.591 10:07:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:17.591 ************************************ 00:03:17.591 START TEST unittest_blob_blobfs 00:03:17.591 ************************************ 00:03:17.591 10:07:23 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # unittest_blob 00:03:17.591 10:07:23 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:03:17.591 10:07:23 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:03:17.591 00:03:17.591 00:03:17.591 CUnit - A unit testing framework for C - Version 2.1-3 00:03:17.591 http://cunit.sourceforge.net/ 00:03:17.591 00:03:17.591 00:03:17.591 Suite: blob_nocopy_noextent 00:03:17.591 Test: blob_init ...[2024-06-10 10:07:23.050713] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:17.591 passed 00:03:17.591 Test: blob_thin_provision ...passed 00:03:17.591 Test: blob_read_only ...passed 00:03:17.591 Test: bs_load ...[2024-06-10 10:07:23.125775] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:17.591 passed 00:03:17.591 Test: bs_load_custom_cluster_size ...passed 00:03:17.591 Test: bs_load_after_failed_grow ...passed 00:03:17.591 Test: bs_cluster_sz ...[2024-06-10 10:07:23.145874] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:17.591 [2024-06-10 10:07:23.145943] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
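The bs_cluster_sz records in this area exercise the blobstore option checks: options set to zero are rejected, a cluster smaller than the 4096-byte metadata page is rejected, and the metadata region must fit in the clusters the backing device actually provides. A minimal sketch of those checks, using a hypothetical bs_opts_sketch structure rather than the real spdk_bs_opts:

#include <errno.h>
#include <stdint.h>

#define MD_PAGE_SIZE 4096u

/* Hypothetical blobstore options; a stand-in for the real options structure. */
struct bs_opts_sketch {
    uint32_t cluster_sz;       /* bytes per cluster */
    uint32_t num_md_pages;     /* pages reserved for metadata */
};

static int bs_opts_check(const struct bs_opts_sketch *opts, uint64_t dev_size)
{
    if (opts->cluster_sz == 0 || opts->num_md_pages == 0) {
        return -EINVAL;        /* "Blobstore options cannot be set to 0" */
    }
    if (opts->cluster_sz < MD_PAGE_SIZE) {
        return -EINVAL;        /* "Cluster size ... is smaller than page size 4096" */
    }
    /* Metadata must fit in the clusters the device provides. */
    uint64_t total_clusters = dev_size / opts->cluster_sz;
    uint64_t md_clusters = ((uint64_t)opts->num_md_pages * MD_PAGE_SIZE +
                            opts->cluster_sz - 1) / opts->cluster_sz;
    if (md_clusters > total_clusters) {
        return -ENOSPC;        /* metadata cannot use more clusters than available */
    }
    return 0;
}

Plugging in cluster_sz = 4095 fails the page-size check, which is the message this test expects to see.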
00:03:17.591 [2024-06-10 10:07:23.145959] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:17.591 passed 00:03:17.591 Test: bs_resize_md ...passed 00:03:17.591 Test: bs_destroy ...passed 00:03:17.851 Test: bs_type ...passed 00:03:17.851 Test: bs_super_block ...passed 00:03:17.851 Test: bs_test_recover_cluster_count ...passed 00:03:17.851 Test: bs_grow_live ...passed 00:03:17.851 Test: bs_grow_live_no_space ...passed 00:03:17.851 Test: bs_test_grow ...passed 00:03:17.851 Test: blob_serialize_test ...passed 00:03:17.851 Test: super_block_crc ...passed 00:03:17.851 Test: blob_thin_prov_write_count_io ...passed 00:03:17.851 Test: blob_thin_prov_unmap_cluster ...passed 00:03:17.851 Test: bs_load_iter_test ...passed 00:03:17.851 Test: blob_relations ...[2024-06-10 10:07:23.291760] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.851 [2024-06-10 10:07:23.291838] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.851 [2024-06-10 10:07:23.291954] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.851 [2024-06-10 10:07:23.291966] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.851 passed 00:03:17.851 Test: blob_relations2 ...[2024-06-10 10:07:23.302447] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.851 [2024-06-10 10:07:23.302483] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.851 [2024-06-10 10:07:23.302495] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.851 [2024-06-10 10:07:23.302503] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.851 [2024-06-10 10:07:23.302631] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.851 [2024-06-10 10:07:23.302642] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.851 [2024-06-10 10:07:23.302682] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.851 [2024-06-10 10:07:23.302692] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.851 passed 00:03:17.851 Test: blob_relations3 ...passed 00:03:17.851 Test: blobstore_clean_power_failure ...passed 00:03:17.851 Test: blob_delete_snapshot_power_failure ...[2024-06-10 10:07:23.437930] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:17.851 [2024-06-10 10:07:23.449352] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:17.851 [2024-06-10 10:07:23.449428] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: 
Failed to open clone 00:03:17.851 [2024-06-10 10:07:23.449444] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:18.110 [2024-06-10 10:07:23.460319] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:18.110 [2024-06-10 10:07:23.460390] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:18.111 [2024-06-10 10:07:23.460405] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:18.111 [2024-06-10 10:07:23.460434] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:18.111 [2024-06-10 10:07:23.471378] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:18.111 [2024-06-10 10:07:23.471474] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:18.111 [2024-06-10 10:07:23.482223] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:18.111 [2024-06-10 10:07:23.482303] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:18.111 [2024-06-10 10:07:23.493013] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:18.111 [2024-06-10 10:07:23.493083] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:18.111 passed 00:03:18.111 Test: blob_create_snapshot_power_failure ...[2024-06-10 10:07:23.524969] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:18.111 [2024-06-10 10:07:23.544833] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:18.111 [2024-06-10 10:07:23.554706] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:18.111 passed 00:03:18.111 Test: blob_io_unit ...passed 00:03:18.111 Test: blob_io_unit_compatibility ...passed 00:03:18.111 Test: blob_ext_md_pages ...passed 00:03:18.111 Test: blob_esnap_io_4096_4096 ...passed 00:03:18.111 Test: blob_esnap_io_512_512 ...passed 00:03:18.111 Test: blob_esnap_io_4096_512 ...passed 00:03:18.111 Test: blob_esnap_io_512_4096 ...passed 00:03:18.370 Test: blob_esnap_clone_resize ...passed 00:03:18.370 Suite: blob_bs_nocopy_noextent 00:03:18.370 Test: blob_open ...passed 00:03:18.370 Test: blob_create ...[2024-06-10 10:07:23.764984] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:18.370 passed 00:03:18.370 Test: blob_create_loop ...passed 00:03:18.370 Test: blob_create_fail ...[2024-06-10 10:07:23.837684] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:18.370 passed 00:03:18.370 Test: blob_create_internal ...passed 00:03:18.370 Test: blob_create_zero_extent ...passed 
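The blob_create / blob_create_fail records just above fail with -28 (ENOSPC) when asked for 65 clusters and with -22 (EINVAL) for deliberately malformed creation parameters. A minimal sketch of the accounting suggested by those numbers, with a hypothetical blob_create_check() and free-cluster counter, not the blobstore's real allocator:

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical free-cluster accounting for a blob-create sketch. */
struct bs_sketch {
    uint64_t free_clusters;
};

static int blob_create_check(struct bs_sketch *bs, uint64_t num_clusters,
                             bool opts_valid)
{
    if (!opts_valid) {
        return -EINVAL;       /* -22: malformed creation options */
    }
    if (num_clusters > bs->free_clusters) {
        return -ENOSPC;       /* -28: e.g. 65 clusters requested on a smaller store */
    }
    bs->free_clusters -= num_clusters;
    return 0;
}

On a store initialized with free_clusters = 64, blob_create_check(&bs, 65, true) returns -ENOSPC, consistent with the "65 (clusters)" record above.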
00:03:18.370 Test: blob_snapshot ...passed 00:03:18.370 Test: blob_clone ...passed 00:03:18.629 Test: blob_inflate ...[2024-06-10 10:07:23.990132] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:18.629 passed 00:03:18.629 Test: blob_delete ...passed 00:03:18.629 Test: blob_resize_test ...[2024-06-10 10:07:24.045650] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:18.629 passed 00:03:18.629 Test: blob_resize_thin_test ...passed 00:03:18.629 Test: channel_ops ...passed 00:03:18.629 Test: blob_super ...passed 00:03:18.630 Test: blob_rw_verify_iov ...passed 00:03:18.630 Test: blob_unmap ...passed 00:03:18.888 Test: blob_iter ...passed 00:03:18.888 Test: blob_parse_md ...passed 00:03:18.888 Test: bs_load_pending_removal ...passed 00:03:18.888 Test: bs_unload ...[2024-06-10 10:07:24.305639] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:18.888 passed 00:03:18.888 Test: bs_usable_clusters ...passed 00:03:18.888 Test: blob_crc ...[2024-06-10 10:07:24.363170] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:18.888 [2024-06-10 10:07:24.363225] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:18.888 passed 00:03:18.888 Test: blob_flags ...passed 00:03:18.888 Test: bs_version ...passed 00:03:18.888 Test: blob_set_xattrs_test ...[2024-06-10 10:07:24.451569] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:18.888 [2024-06-10 10:07:24.451667] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:18.888 passed 00:03:19.146 Test: blob_thin_prov_alloc ...passed 00:03:19.146 Test: blob_insert_cluster_msg_test ...passed 00:03:19.146 Test: blob_thin_prov_rw ...passed 00:03:19.146 Test: blob_thin_prov_rle ...passed 00:03:19.146 Test: blob_thin_prov_rw_iov ...passed 00:03:19.146 Test: blob_snapshot_rw ...passed 00:03:19.146 Test: blob_snapshot_rw_iov ...passed 00:03:19.406 Test: blob_inflate_rw ...passed 00:03:19.406 Test: blob_snapshot_freeze_io ...passed 00:03:19.406 Test: blob_operation_split_rw ...passed 00:03:19.406 Test: blob_operation_split_rw_iov ...passed 00:03:19.406 Test: blob_simultaneous_operations ...[2024-06-10 10:07:24.890291] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:19.406 [2024-06-10 10:07:24.890352] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.406 [2024-06-10 10:07:24.890584] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:19.406 [2024-06-10 10:07:24.890597] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.406 [2024-06-10 10:07:24.893632] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:19.406 [2024-06-10 
10:07:24.893658] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.406 [2024-06-10 10:07:24.893673] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:19.406 [2024-06-10 10:07:24.893679] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.406 passed 00:03:19.406 Test: blob_persist_test ...passed 00:03:19.406 Test: blob_decouple_snapshot ...passed 00:03:19.666 Test: blob_seek_io_unit ...passed 00:03:19.666 Test: blob_nested_freezes ...passed 00:03:19.666 Test: blob_clone_resize ...passed 00:03:19.666 Test: blob_shallow_copy ...[2024-06-10 10:07:25.090215] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:19.666 [2024-06-10 10:07:25.090329] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:19.666 [2024-06-10 10:07:25.090348] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:19.666 passed 00:03:19.666 Suite: blob_blob_nocopy_noextent 00:03:19.666 Test: blob_write ...passed 00:03:19.666 Test: blob_read ...passed 00:03:19.666 Test: blob_rw_verify ...passed 00:03:19.666 Test: blob_rw_verify_iov_nomem ...passed 00:03:19.666 Test: blob_rw_iov_read_only ...passed 00:03:19.925 Test: blob_xattr ...passed 00:03:19.925 Test: blob_dirty_shutdown ...passed 00:03:19.925 Test: blob_is_degraded ...passed 00:03:19.925 Suite: blob_esnap_bs_nocopy_noextent 00:03:19.925 Test: blob_esnap_create ...passed 00:03:19.925 Test: blob_esnap_thread_add_remove ...passed 00:03:19.925 Test: blob_esnap_clone_snapshot ...passed 00:03:19.925 Test: blob_esnap_clone_inflate ...passed 00:03:19.925 Test: blob_esnap_clone_decouple ...passed 00:03:19.925 Test: blob_esnap_clone_reload ...passed 00:03:20.184 Test: blob_esnap_hotplug ...passed 00:03:20.185 Test: blob_set_parent ...[2024-06-10 10:07:25.558501] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:20.185 [2024-06-10 10:07:25.558555] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:20.185 [2024-06-10 10:07:25.558572] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:20.185 [2024-06-10 10:07:25.558580] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:20.185 [2024-06-10 10:07:25.558647] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:20.185 passed 00:03:20.185 Test: blob_set_external_parent ...[2024-06-10 10:07:25.586795] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:20.185 [2024-06-10 10:07:25.586830] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: 
Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:20.185 [2024-06-10 10:07:25.586837] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:20.185 [2024-06-10 10:07:25.586871] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:20.185 passed 00:03:20.185 Suite: blob_nocopy_extent 00:03:20.185 Test: blob_init ...[2024-06-10 10:07:25.596946] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:20.185 passed 00:03:20.185 Test: blob_thin_provision ...passed 00:03:20.185 Test: blob_read_only ...passed 00:03:20.185 Test: bs_load ...[2024-06-10 10:07:25.634466] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:20.185 passed 00:03:20.185 Test: bs_load_custom_cluster_size ...passed 00:03:20.185 Test: bs_load_after_failed_grow ...passed 00:03:20.185 Test: bs_cluster_sz ...[2024-06-10 10:07:25.653238] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:20.185 [2024-06-10 10:07:25.653282] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:20.185 [2024-06-10 10:07:25.653292] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:20.185 passed 00:03:20.185 Test: bs_resize_md ...passed 00:03:20.185 Test: bs_destroy ...passed 00:03:20.185 Test: bs_type ...passed 00:03:20.185 Test: bs_super_block ...passed 00:03:20.185 Test: bs_test_recover_cluster_count ...passed 00:03:20.185 Test: bs_grow_live ...passed 00:03:20.185 Test: bs_grow_live_no_space ...passed 00:03:20.185 Test: bs_test_grow ...passed 00:03:20.185 Test: blob_serialize_test ...passed 00:03:20.185 Test: super_block_crc ...passed 00:03:20.185 Test: blob_thin_prov_write_count_io ...passed 00:03:20.185 Test: blob_thin_prov_unmap_cluster ...passed 00:03:20.444 Test: bs_load_iter_test ...passed 00:03:20.444 Test: blob_relations ...[2024-06-10 10:07:25.793233] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.444 [2024-06-10 10:07:25.793290] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.444 [2024-06-10 10:07:25.793377] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.444 [2024-06-10 10:07:25.793385] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.444 passed 00:03:20.444 Test: blob_relations2 ...[2024-06-10 10:07:25.803477] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.444 [2024-06-10 10:07:25.803501] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.444 [2024-06-10 10:07:25.803509] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.444 [2024-06-10 10:07:25.803514] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.444 [2024-06-10 10:07:25.803614] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.444 [2024-06-10 10:07:25.803622] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.444 [2024-06-10 10:07:25.803652] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.444 [2024-06-10 10:07:25.803659] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.444 passed 00:03:20.444 Test: blob_relations3 ...passed 00:03:20.444 Test: blobstore_clean_power_failure ...passed 00:03:20.444 Test: blob_delete_snapshot_power_failure ...[2024-06-10 10:07:25.935327] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:20.444 [2024-06-10 10:07:25.944979] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:20.445 [2024-06-10 10:07:25.954620] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:20.445 [2024-06-10 10:07:25.954664] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:20.445 [2024-06-10 10:07:25.954672] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.445 [2024-06-10 10:07:25.964267] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:20.445 [2024-06-10 10:07:25.964301] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:20.445 [2024-06-10 10:07:25.964308] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:20.445 [2024-06-10 10:07:25.964315] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.445 [2024-06-10 10:07:25.973932] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:20.445 [2024-06-10 10:07:25.973964] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:20.445 [2024-06-10 10:07:25.973970] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:20.445 [2024-06-10 10:07:25.973977] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.445 [2024-06-10 10:07:25.983657] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:20.445 [2024-06-10 10:07:25.983689] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.445 [2024-06-10 10:07:25.993375] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:20.445 [2024-06-10 10:07:25.993425] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.445 [2024-06-10 10:07:26.003641] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:20.445 [2024-06-10 10:07:26.003692] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.445 passed 00:03:20.445 Test: blob_create_snapshot_power_failure ...[2024-06-10 10:07:26.033154] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:20.704 [2024-06-10 10:07:26.043357] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:20.704 [2024-06-10 10:07:26.062439] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:20.704 [2024-06-10 10:07:26.072042] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:20.704 passed 00:03:20.704 Test: blob_io_unit ...passed 00:03:20.704 Test: blob_io_unit_compatibility ...passed 00:03:20.704 Test: blob_ext_md_pages ...passed 00:03:20.704 Test: blob_esnap_io_4096_4096 ...passed 00:03:20.704 Test: blob_esnap_io_512_512 ...passed 00:03:20.704 Test: blob_esnap_io_4096_512 ...passed 00:03:20.704 Test: blob_esnap_io_512_4096 ...passed 00:03:20.704 Test: blob_esnap_clone_resize ...passed 00:03:20.704 Suite: blob_bs_nocopy_extent 00:03:20.704 Test: blob_open ...passed 00:03:20.704 Test: blob_create ...[2024-06-10 10:07:26.276129] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:20.704 passed 00:03:20.963 Test: blob_create_loop ...passed 00:03:20.963 Test: blob_create_fail ...[2024-06-10 10:07:26.344759] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:20.963 passed 00:03:20.963 Test: blob_create_internal ...passed 00:03:20.963 Test: blob_create_zero_extent ...passed 00:03:20.963 Test: blob_snapshot ...passed 00:03:20.963 Test: blob_clone ...passed 00:03:20.963 Test: blob_inflate ...[2024-06-10 10:07:26.494850] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
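The blob_delete_snapshot_power_failure and blob_create_snapshot_power_failure runs in this suite generate their -5 (EIO) read and write failures by cutting the backing device off after a limited number of I/Os, then checking that the half-finished operation still leaves a loadable, consistent blobstore. A hedged sketch of that fail-after-N-writes device pattern, with hypothetical names only and nothing taken from the real test bdev:

#include <errno.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical RAM-backed device that "loses power" after a budget of writes. */
struct failing_dev {
    uint8_t  *mem;
    uint64_t  size;
    uint32_t  writes_left;   /* remaining writes before simulated power loss */
};

static int failing_dev_write(struct failing_dev *dev, uint64_t off,
                             const void *buf, uint64_t len)
{
    if (off + len > dev->size) {
        return -EINVAL;
    }
    if (dev->writes_left == 0) {
        return -EIO;                     /* -5: the read/write failure records above */
    }
    dev->writes_left--;
    memcpy(dev->mem + off, buf, len);
    return 0;
}

/*
 * Typical driver loop for a power-failure test: retry the whole operation
 * with an ever larger I/O budget; every truncated attempt must leave the
 * on-disk state loadable and consistent.
 *
 * for (uint32_t budget = 0; ; budget++) {
 *     dev.writes_left = budget;
 *     int rc = do_operation(&dev);      // e.g. delete a snapshot
 *     check_consistency(&dev);          // reload and verify after each attempt
 *     if (rc == 0) break;               // operation finally completed
 * }
 */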
00:03:20.963 passed 00:03:20.963 Test: blob_delete ...passed 00:03:20.963 Test: blob_resize_test ...[2024-06-10 10:07:26.550018] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:20.963 passed 00:03:21.223 Test: blob_resize_thin_test ...passed 00:03:21.223 Test: channel_ops ...passed 00:03:21.223 Test: blob_super ...passed 00:03:21.223 Test: blob_rw_verify_iov ...passed 00:03:21.223 Test: blob_unmap ...passed 00:03:21.223 Test: blob_iter ...passed 00:03:21.223 Test: blob_parse_md ...passed 00:03:21.223 Test: bs_load_pending_removal ...passed 00:03:21.482 Test: bs_unload ...[2024-06-10 10:07:26.821903] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:21.482 passed 00:03:21.482 Test: bs_usable_clusters ...passed 00:03:21.482 Test: blob_crc ...[2024-06-10 10:07:26.878695] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:21.482 [2024-06-10 10:07:26.878754] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:21.482 passed 00:03:21.482 Test: blob_flags ...passed 00:03:21.482 Test: bs_version ...passed 00:03:21.482 Test: blob_set_xattrs_test ...[2024-06-10 10:07:26.963400] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:21.482 [2024-06-10 10:07:26.963448] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:21.482 passed 00:03:21.482 Test: blob_thin_prov_alloc ...passed 00:03:21.482 Test: blob_insert_cluster_msg_test ...passed 00:03:21.482 Test: blob_thin_prov_rw ...passed 00:03:21.741 Test: blob_thin_prov_rle ...passed 00:03:21.741 Test: blob_thin_prov_rw_iov ...passed 00:03:21.741 Test: blob_snapshot_rw ...passed 00:03:21.741 Test: blob_snapshot_rw_iov ...passed 00:03:21.741 Test: blob_inflate_rw ...passed 00:03:21.741 Test: blob_snapshot_freeze_io ...passed 00:03:21.741 Test: blob_operation_split_rw ...passed 00:03:22.001 Test: blob_operation_split_rw_iov ...passed 00:03:22.001 Test: blob_simultaneous_operations ...[2024-06-10 10:07:27.377703] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.001 [2024-06-10 10:07:27.377757] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.001 [2024-06-10 10:07:27.377999] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.001 [2024-06-10 10:07:27.378013] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.001 [2024-06-10 10:07:27.381139] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.001 [2024-06-10 10:07:27.381161] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.001 [2024-06-10 10:07:27.381176] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is 
open 00:03:22.001 [2024-06-10 10:07:27.381182] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.001 passed 00:03:22.001 Test: blob_persist_test ...passed 00:03:22.001 Test: blob_decouple_snapshot ...passed 00:03:22.001 Test: blob_seek_io_unit ...passed 00:03:22.001 Test: blob_nested_freezes ...passed 00:03:22.001 Test: blob_clone_resize ...passed 00:03:22.001 Test: blob_shallow_copy ...[2024-06-10 10:07:27.568885] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:22.001 [2024-06-10 10:07:27.568944] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:22.001 [2024-06-10 10:07:27.568953] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:22.001 passed 00:03:22.001 Suite: blob_blob_nocopy_extent 00:03:22.259 Test: blob_write ...passed 00:03:22.259 Test: blob_read ...passed 00:03:22.259 Test: blob_rw_verify ...passed 00:03:22.259 Test: blob_rw_verify_iov_nomem ...passed 00:03:22.259 Test: blob_rw_iov_read_only ...passed 00:03:22.259 Test: blob_xattr ...passed 00:03:22.259 Test: blob_dirty_shutdown ...passed 00:03:22.259 Test: blob_is_degraded ...passed 00:03:22.259 Suite: blob_esnap_bs_nocopy_extent 00:03:22.259 Test: blob_esnap_create ...passed 00:03:22.518 Test: blob_esnap_thread_add_remove ...passed 00:03:22.518 Test: blob_esnap_clone_snapshot ...passed 00:03:22.518 Test: blob_esnap_clone_inflate ...passed 00:03:22.518 Test: blob_esnap_clone_decouple ...passed 00:03:22.518 Test: blob_esnap_clone_reload ...passed 00:03:22.518 Test: blob_esnap_hotplug ...passed 00:03:22.518 Test: blob_set_parent ...[2024-06-10 10:07:28.026885] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:22.518 [2024-06-10 10:07:28.026940] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:22.518 [2024-06-10 10:07:28.026957] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:22.518 [2024-06-10 10:07:28.026965] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:22.518 [2024-06-10 10:07:28.027008] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:22.518 passed 00:03:22.518 Test: blob_set_external_parent ...[2024-06-10 10:07:28.055323] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:22.518 [2024-06-10 10:07:28.055367] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:22.518 [2024-06-10 10:07:28.055375] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:22.518 [2024-06-10 
10:07:28.055411] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:22.518 passed 00:03:22.518 Suite: blob_copy_noextent 00:03:22.518 Test: blob_init ...[2024-06-10 10:07:28.064915] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:22.518 passed 00:03:22.518 Test: blob_thin_provision ...passed 00:03:22.518 Test: blob_read_only ...passed 00:03:22.519 Test: bs_load ...[2024-06-10 10:07:28.102971] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:22.519 passed 00:03:22.519 Test: bs_load_custom_cluster_size ...passed 00:03:22.778 Test: bs_load_after_failed_grow ...passed 00:03:22.778 Test: bs_cluster_sz ...[2024-06-10 10:07:28.122036] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:22.778 [2024-06-10 10:07:28.122086] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:22.778 [2024-06-10 10:07:28.122098] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:22.778 passed 00:03:22.778 Test: bs_resize_md ...passed 00:03:22.778 Test: bs_destroy ...passed 00:03:22.778 Test: bs_type ...passed 00:03:22.778 Test: bs_super_block ...passed 00:03:22.778 Test: bs_test_recover_cluster_count ...passed 00:03:22.778 Test: bs_grow_live ...passed 00:03:22.778 Test: bs_grow_live_no_space ...passed 00:03:22.778 Test: bs_test_grow ...passed 00:03:22.778 Test: blob_serialize_test ...passed 00:03:22.778 Test: super_block_crc ...passed 00:03:22.778 Test: blob_thin_prov_write_count_io ...passed 00:03:22.778 Test: blob_thin_prov_unmap_cluster ...passed 00:03:22.778 Test: bs_load_iter_test ...passed 00:03:22.778 Test: blob_relations ...[2024-06-10 10:07:28.258563] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:22.778 [2024-06-10 10:07:28.258614] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.778 [2024-06-10 10:07:28.258674] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:22.778 [2024-06-10 10:07:28.258681] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.778 passed 00:03:22.778 Test: blob_relations2 ...[2024-06-10 10:07:28.268534] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:22.778 [2024-06-10 10:07:28.268568] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.778 [2024-06-10 10:07:28.268592] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:22.778 [2024-06-10 10:07:28.268598] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.778 [2024-06-10 10:07:28.268678] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:22.778 [2024-06-10 10:07:28.268686] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.778 [2024-06-10 10:07:28.268713] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:22.778 [2024-06-10 10:07:28.268720] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.778 passed 00:03:22.778 Test: blob_relations3 ...passed 00:03:23.037 Test: blobstore_clean_power_failure ...passed 00:03:23.037 Test: blob_delete_snapshot_power_failure ...[2024-06-10 10:07:28.400647] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:23.037 [2024-06-10 10:07:28.410159] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:23.037 [2024-06-10 10:07:28.410190] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:23.037 [2024-06-10 10:07:28.410197] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.037 [2024-06-10 10:07:28.419717] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:23.037 [2024-06-10 10:07:28.419742] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:23.037 [2024-06-10 10:07:28.419754] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:23.037 [2024-06-10 10:07:28.419761] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.037 [2024-06-10 10:07:28.429260] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:23.037 [2024-06-10 10:07:28.429277] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.037 [2024-06-10 10:07:28.438750] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:23.037 [2024-06-10 10:07:28.438782] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.037 [2024-06-10 10:07:28.448380] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:23.037 [2024-06-10 10:07:28.448405] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.037 passed 00:03:23.037 Test: blob_create_snapshot_power_failure ...[2024-06-10 10:07:28.476757] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:23.037 [2024-06-10 10:07:28.495619] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:23.037 
[2024-06-10 10:07:28.505192] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:23.037 passed 00:03:23.037 Test: blob_io_unit ...passed 00:03:23.037 Test: blob_io_unit_compatibility ...passed 00:03:23.037 Test: blob_ext_md_pages ...passed 00:03:23.037 Test: blob_esnap_io_4096_4096 ...passed 00:03:23.037 Test: blob_esnap_io_512_512 ...passed 00:03:23.037 Test: blob_esnap_io_4096_512 ...passed 00:03:23.295 Test: blob_esnap_io_512_4096 ...passed 00:03:23.295 Test: blob_esnap_clone_resize ...passed 00:03:23.295 Suite: blob_bs_copy_noextent 00:03:23.295 Test: blob_open ...passed 00:03:23.295 Test: blob_create ...[2024-06-10 10:07:28.704716] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:23.295 passed 00:03:23.295 Test: blob_create_loop ...passed 00:03:23.295 Test: blob_create_fail ...[2024-06-10 10:07:28.771370] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:23.295 passed 00:03:23.295 Test: blob_create_internal ...passed 00:03:23.295 Test: blob_create_zero_extent ...passed 00:03:23.295 Test: blob_snapshot ...passed 00:03:23.295 Test: blob_clone ...passed 00:03:23.553 Test: blob_inflate ...[2024-06-10 10:07:28.913736] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:23.553 passed 00:03:23.553 Test: blob_delete ...passed 00:03:23.553 Test: blob_resize_test ...[2024-06-10 10:07:28.967939] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:23.553 passed 00:03:23.553 Test: blob_resize_thin_test ...passed 00:03:23.553 Test: channel_ops ...passed 00:03:23.553 Test: blob_super ...passed 00:03:23.553 Test: blob_rw_verify_iov ...passed 00:03:23.553 Test: blob_unmap ...passed 00:03:23.553 Test: blob_iter ...passed 00:03:23.811 Test: blob_parse_md ...passed 00:03:23.811 Test: bs_load_pending_removal ...passed 00:03:23.811 Test: bs_unload ...[2024-06-10 10:07:29.219111] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:23.811 passed 00:03:23.811 Test: bs_usable_clusters ...passed 00:03:23.811 Test: blob_crc ...[2024-06-10 10:07:29.274582] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:23.811 [2024-06-10 10:07:29.274632] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:23.811 passed 00:03:23.811 Test: blob_flags ...passed 00:03:23.811 Test: bs_version ...passed 00:03:23.811 Test: blob_set_xattrs_test ...[2024-06-10 10:07:29.358302] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:23.811 [2024-06-10 10:07:29.358349] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:23.811 passed 00:03:23.811 Test: blob_thin_prov_alloc ...passed 00:03:24.068 Test: blob_insert_cluster_msg_test ...passed 00:03:24.068 Test: blob_thin_prov_rw ...passed 
00:03:24.068 Test: blob_thin_prov_rle ...passed 00:03:24.068 Test: blob_thin_prov_rw_iov ...passed 00:03:24.068 Test: blob_snapshot_rw ...passed 00:03:24.068 Test: blob_snapshot_rw_iov ...passed 00:03:24.068 Test: blob_inflate_rw ...passed 00:03:24.068 Test: blob_snapshot_freeze_io ...passed 00:03:24.325 Test: blob_operation_split_rw ...passed 00:03:24.325 Test: blob_operation_split_rw_iov ...passed 00:03:24.326 Test: blob_simultaneous_operations ...[2024-06-10 10:07:29.770508] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:24.326 [2024-06-10 10:07:29.770566] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:24.326 [2024-06-10 10:07:29.770800] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:24.326 [2024-06-10 10:07:29.770812] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:24.326 [2024-06-10 10:07:29.772798] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:24.326 [2024-06-10 10:07:29.772819] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:24.326 [2024-06-10 10:07:29.772834] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:24.326 [2024-06-10 10:07:29.772841] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:24.326 passed 00:03:24.326 Test: blob_persist_test ...passed 00:03:24.326 Test: blob_decouple_snapshot ...passed 00:03:24.326 Test: blob_seek_io_unit ...passed 00:03:24.326 Test: blob_nested_freezes ...passed 00:03:24.583 Test: blob_clone_resize ...passed 00:03:24.583 Test: blob_shallow_copy ...[2024-06-10 10:07:29.955181] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:24.584 [2024-06-10 10:07:29.955237] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:24.584 [2024-06-10 10:07:29.955244] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:24.584 passed 00:03:24.584 Suite: blob_blob_copy_noextent 00:03:24.584 Test: blob_write ...passed 00:03:24.584 Test: blob_read ...passed 00:03:24.584 Test: blob_rw_verify ...passed 00:03:24.584 Test: blob_rw_verify_iov_nomem ...passed 00:03:24.584 Test: blob_rw_iov_read_only ...passed 00:03:24.584 Test: blob_xattr ...passed 00:03:24.584 Test: blob_dirty_shutdown ...passed 00:03:24.842 Test: blob_is_degraded ...passed 00:03:24.842 Suite: blob_esnap_bs_copy_noextent 00:03:24.842 Test: blob_esnap_create ...passed 00:03:24.842 Test: blob_esnap_thread_add_remove ...passed 00:03:24.842 Test: blob_esnap_clone_snapshot ...passed 00:03:24.842 Test: blob_esnap_clone_inflate ...passed 00:03:24.842 Test: blob_esnap_clone_decouple ...passed 00:03:24.842 Test: blob_esnap_clone_reload ...passed 00:03:24.842 Test: blob_esnap_hotplug ...passed 
00:03:24.842 Test: blob_set_parent ...[2024-06-10 10:07:30.409072] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:24.842 [2024-06-10 10:07:30.409129] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:24.842 [2024-06-10 10:07:30.409145] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:24.842 [2024-06-10 10:07:30.409153] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:24.842 [2024-06-10 10:07:30.409192] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:24.842 passed 00:03:24.842 Test: blob_set_external_parent ...[2024-06-10 10:07:30.436933] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:24.842 [2024-06-10 10:07:30.436971] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:24.842 [2024-06-10 10:07:30.436979] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:24.842 [2024-06-10 10:07:30.437013] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:25.101 passed 00:03:25.101 Suite: blob_copy_extent 00:03:25.101 Test: blob_init ...[2024-06-10 10:07:30.446505] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:25.101 passed 00:03:25.101 Test: blob_thin_provision ...passed 00:03:25.101 Test: blob_read_only ...passed 00:03:25.101 Test: bs_load ...[2024-06-10 10:07:30.483793] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:25.101 passed 00:03:25.101 Test: bs_load_custom_cluster_size ...passed 00:03:25.101 Test: bs_load_after_failed_grow ...passed 00:03:25.101 Test: bs_cluster_sz ...[2024-06-10 10:07:30.502503] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:25.101 [2024-06-10 10:07:30.502550] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:25.101 [2024-06-10 10:07:30.502560] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:25.101 passed 00:03:25.101 Test: bs_resize_md ...passed 00:03:25.101 Test: bs_destroy ...passed 00:03:25.101 Test: bs_type ...passed 00:03:25.101 Test: bs_super_block ...passed 00:03:25.101 Test: bs_test_recover_cluster_count ...passed 00:03:25.101 Test: bs_grow_live ...passed 00:03:25.101 Test: bs_grow_live_no_space ...passed 00:03:25.101 Test: bs_test_grow ...passed 00:03:25.101 Test: blob_serialize_test ...passed 00:03:25.101 Test: super_block_crc ...passed 00:03:25.101 Test: blob_thin_prov_write_count_io ...passed 00:03:25.101 Test: blob_thin_prov_unmap_cluster ...passed 00:03:25.101 Test: bs_load_iter_test ...passed 00:03:25.101 Test: blob_relations ...[2024-06-10 10:07:30.637335] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:25.101 [2024-06-10 10:07:30.637383] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.101 [2024-06-10 10:07:30.637449] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:25.101 [2024-06-10 10:07:30.637455] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.101 passed 00:03:25.101 Test: blob_relations2 ...[2024-06-10 10:07:30.647339] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:25.101 [2024-06-10 10:07:30.647360] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.101 [2024-06-10 10:07:30.647368] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:25.101 [2024-06-10 10:07:30.647373] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.101 [2024-06-10 10:07:30.647461] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:25.101 [2024-06-10 10:07:30.647469] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.101 [2024-06-10 10:07:30.647499] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:25.101 [2024-06-10 10:07:30.647505] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.101 passed 00:03:25.101 Test: blob_relations3 ...passed 00:03:25.359 Test: blobstore_clean_power_failure ...passed 00:03:25.359 Test: blob_delete_snapshot_power_failure ...[2024-06-10 10:07:30.778108] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:25.359 [2024-06-10 10:07:30.787744] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:25.359 [2024-06-10 10:07:30.797200] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for 
blobid 0x100000000: -5 00:03:25.359 [2024-06-10 10:07:30.797233] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:25.359 [2024-06-10 10:07:30.797240] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.359 [2024-06-10 10:07:30.806781] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:25.359 [2024-06-10 10:07:30.806806] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:25.359 [2024-06-10 10:07:30.806814] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:25.359 [2024-06-10 10:07:30.806829] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.359 [2024-06-10 10:07:30.816385] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:25.359 [2024-06-10 10:07:30.816411] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:25.359 [2024-06-10 10:07:30.816418] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:25.359 [2024-06-10 10:07:30.816425] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.359 [2024-06-10 10:07:30.825949] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:25.359 [2024-06-10 10:07:30.825969] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.359 [2024-06-10 10:07:30.835547] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:25.359 [2024-06-10 10:07:30.835570] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.359 [2024-06-10 10:07:30.845107] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:25.359 [2024-06-10 10:07:30.845127] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.359 passed 00:03:25.359 Test: blob_create_snapshot_power_failure ...[2024-06-10 10:07:30.873605] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:25.359 [2024-06-10 10:07:30.883155] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:25.359 [2024-06-10 10:07:30.902217] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:25.359 [2024-06-10 10:07:30.911778] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:25.359 passed 00:03:25.359 Test: blob_io_unit ...passed 00:03:25.617 Test: blob_io_unit_compatibility ...passed 00:03:25.617 Test: blob_ext_md_pages ...passed 00:03:25.617 Test: 
blob_esnap_io_4096_4096 ...passed 00:03:25.617 Test: blob_esnap_io_512_512 ...passed 00:03:25.617 Test: blob_esnap_io_4096_512 ...passed 00:03:25.617 Test: blob_esnap_io_512_4096 ...passed 00:03:25.617 Test: blob_esnap_clone_resize ...passed 00:03:25.617 Suite: blob_bs_copy_extent 00:03:25.617 Test: blob_open ...passed 00:03:25.617 Test: blob_create ...[2024-06-10 10:07:31.110398] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:25.617 passed 00:03:25.617 Test: blob_create_loop ...passed 00:03:25.617 Test: blob_create_fail ...[2024-06-10 10:07:31.177938] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:25.617 passed 00:03:25.875 Test: blob_create_internal ...passed 00:03:25.875 Test: blob_create_zero_extent ...passed 00:03:25.875 Test: blob_snapshot ...passed 00:03:25.875 Test: blob_clone ...passed 00:03:25.875 Test: blob_inflate ...[2024-06-10 10:07:31.323096] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:25.875 passed 00:03:25.875 Test: blob_delete ...passed 00:03:25.875 Test: blob_resize_test ...[2024-06-10 10:07:31.378334] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:25.875 passed 00:03:25.875 Test: blob_resize_thin_test ...passed 00:03:25.875 Test: channel_ops ...passed 00:03:25.875 Test: blob_super ...passed 00:03:26.162 Test: blob_rw_verify_iov ...passed 00:03:26.162 Test: blob_unmap ...passed 00:03:26.162 Test: blob_iter ...passed 00:03:26.162 Test: blob_parse_md ...passed 00:03:26.162 Test: bs_load_pending_removal ...passed 00:03:26.162 Test: bs_unload ...[2024-06-10 10:07:31.630671] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:26.162 passed 00:03:26.162 Test: bs_usable_clusters ...passed 00:03:26.162 Test: blob_crc ...[2024-06-10 10:07:31.687397] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:26.162 [2024-06-10 10:07:31.687449] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:26.162 passed 00:03:26.162 Test: blob_flags ...passed 00:03:26.162 Test: bs_version ...passed 00:03:26.421 Test: blob_set_xattrs_test ...[2024-06-10 10:07:31.772407] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:26.421 [2024-06-10 10:07:31.772465] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:26.421 passed 00:03:26.421 Test: blob_thin_prov_alloc ...passed 00:03:26.421 Test: blob_insert_cluster_msg_test ...passed 00:03:26.421 Test: blob_thin_prov_rw ...passed 00:03:26.421 Test: blob_thin_prov_rle ...passed 00:03:26.421 Test: blob_thin_prov_rw_iov ...passed 00:03:26.421 Test: blob_snapshot_rw ...passed 00:03:26.421 Test: blob_snapshot_rw_iov ...passed 00:03:26.681 Test: blob_inflate_rw ...passed 00:03:26.681 Test: blob_snapshot_freeze_io ...passed 00:03:26.681 Test: blob_operation_split_rw ...passed 
00:03:26.681 Test: blob_operation_split_rw_iov ...passed 00:03:26.681 Test: blob_simultaneous_operations ...[2024-06-10 10:07:32.185831] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:26.681 [2024-06-10 10:07:32.185886] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.681 [2024-06-10 10:07:32.186125] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:26.681 [2024-06-10 10:07:32.186138] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.681 [2024-06-10 10:07:32.188070] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:26.681 [2024-06-10 10:07:32.188092] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.681 [2024-06-10 10:07:32.188106] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:26.681 [2024-06-10 10:07:32.188112] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:26.681 passed 00:03:26.681 Test: blob_persist_test ...passed 00:03:26.681 Test: blob_decouple_snapshot ...passed 00:03:26.939 Test: blob_seek_io_unit ...passed 00:03:26.939 Test: blob_nested_freezes ...passed 00:03:26.939 Test: blob_clone_resize ...passed 00:03:26.939 Test: blob_shallow_copy ...[2024-06-10 10:07:32.371782] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:26.939 [2024-06-10 10:07:32.371843] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:26.939 [2024-06-10 10:07:32.371852] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:26.939 passed 00:03:26.939 Suite: blob_blob_copy_extent 00:03:26.939 Test: blob_write ...passed 00:03:26.939 Test: blob_read ...passed 00:03:26.939 Test: blob_rw_verify ...passed 00:03:26.939 Test: blob_rw_verify_iov_nomem ...passed 00:03:26.939 Test: blob_rw_iov_read_only ...passed 00:03:27.198 Test: blob_xattr ...passed 00:03:27.198 Test: blob_dirty_shutdown ...passed 00:03:27.198 Test: blob_is_degraded ...passed 00:03:27.198 Suite: blob_esnap_bs_copy_extent 00:03:27.198 Test: blob_esnap_create ...passed 00:03:27.198 Test: blob_esnap_thread_add_remove ...passed 00:03:27.198 Test: blob_esnap_clone_snapshot ...passed 00:03:27.198 Test: blob_esnap_clone_inflate ...passed 00:03:27.198 Test: blob_esnap_clone_decouple ...passed 00:03:27.198 Test: blob_esnap_clone_reload ...passed 00:03:27.458 Test: blob_esnap_hotplug ...passed 00:03:27.458 Test: blob_set_parent ...[2024-06-10 10:07:32.824597] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:27.459 [2024-06-10 10:07:32.824653] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the 
same 00:03:27.459 [2024-06-10 10:07:32.824669] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:27.459 [2024-06-10 10:07:32.824695] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:27.459 [2024-06-10 10:07:32.824903] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:27.459 passed 00:03:27.459 Test: blob_set_external_parent ...[2024-06-10 10:07:32.853170] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:27.459 [2024-06-10 10:07:32.853212] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:27.459 [2024-06-10 10:07:32.853219] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:27.459 [2024-06-10 10:07:32.853257] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:27.459 passed 00:03:27.459 00:03:27.459 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.459 suites 16 16 n/a 0 0 00:03:27.459 tests 376 376 376 0 0 00:03:27.459 asserts 143965 143965 143965 0 n/a 00:03:27.459 00:03:27.459 Elapsed time = 9.805 seconds 00:03:27.459 10:07:32 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:03:27.459 00:03:27.459 00:03:27.459 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.459 http://cunit.sourceforge.net/ 00:03:27.459 00:03:27.459 00:03:27.459 Suite: blob_bdev 00:03:27.459 Test: create_bs_dev ...passed 00:03:27.459 Test: create_bs_dev_ro ...[2024-06-10 10:07:32.874803] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:03:27.459 passed 00:03:27.459 Test: create_bs_dev_rw ...passed 00:03:27.459 Test: claim_bs_dev ...passed 00:03:27.459 Test: claim_bs_dev_ro ...passed 00:03:27.459 Test: deferred_destroy_refs ...passed 00:03:27.459 Test: deferred_destroy_channels ...passed 00:03:27.459 Test: deferred_destroy_threads ...passed 00:03:27.459 00:03:27.459 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.459 suites 1 1 n/a 0 0 00:03:27.459 tests 8 8 8 0 0 00:03:27.459 asserts 119 119 119 0 n/a 00:03:27.459 00:03:27.459 Elapsed time = 0.000 seconds 00:03:27.459 [2024-06-10 10:07:32.875142] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:03:27.459 10:07:32 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:03:27.459 00:03:27.459 00:03:27.459 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.459 http://cunit.sourceforge.net/ 00:03:27.459 00:03:27.459 00:03:27.459 Suite: tree 00:03:27.459 Test: blobfs_tree_op_test ...passed 00:03:27.459 00:03:27.459 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.459 suites 1 1 n/a 0 0 00:03:27.459 tests 1 1 1 0 0 00:03:27.459 asserts 27 27 27 0 n/a 00:03:27.459 00:03:27.459 
Elapsed time = 0.000 seconds 00:03:27.459 10:07:32 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:03:27.459 00:03:27.459 00:03:27.459 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.459 http://cunit.sourceforge.net/ 00:03:27.459 00:03:27.459 00:03:27.459 Suite: blobfs_async_ut 00:03:27.459 Test: fs_init ...passed 00:03:27.459 Test: fs_open ...passed 00:03:27.459 Test: fs_create ...passed 00:03:27.459 Test: fs_truncate ...passed 00:03:27.459 Test: fs_rename ...[2024-06-10 10:07:32.977552] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:03:27.459 passed 00:03:27.459 Test: fs_rw_async ...passed 00:03:27.459 Test: fs_writev_readv_async ...passed 00:03:27.459 Test: tree_find_buffer_ut ...passed 00:03:27.459 Test: channel_ops ...passed 00:03:27.459 Test: channel_ops_sync ...passed 00:03:27.459 00:03:27.459 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.459 suites 1 1 n/a 0 0 00:03:27.459 tests 10 10 10 0 0 00:03:27.459 asserts 292 292 292 0 n/a 00:03:27.459 00:03:27.459 Elapsed time = 0.133 seconds 00:03:27.459 10:07:33 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:03:27.459 00:03:27.459 00:03:27.459 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.459 http://cunit.sourceforge.net/ 00:03:27.459 00:03:27.459 00:03:27.459 Suite: blobfs_sync_ut 00:03:27.719 Test: cache_read_after_write ...[2024-06-10 10:07:33.078018] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:03:27.719 passed 00:03:27.719 Test: file_length ...passed 00:03:27.719 Test: append_write_to_extend_blob ...passed 00:03:27.719 Test: partial_buffer ...passed 00:03:27.719 Test: cache_write_null_buffer ...passed 00:03:27.719 Test: fs_create_sync ...passed 00:03:27.719 Test: fs_rename_sync ...passed 00:03:27.719 Test: cache_append_no_cache ...passed 00:03:27.719 Test: fs_delete_file_without_close ...passed 00:03:27.719 00:03:27.719 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.719 suites 1 1 n/a 0 0 00:03:27.719 tests 9 9 9 0 0 00:03:27.719 asserts 345 345 345 0 n/a 00:03:27.719 00:03:27.719 Elapsed time = 0.258 seconds 00:03:27.719 10:07:33 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:03:27.719 00:03:27.719 00:03:27.719 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.719 http://cunit.sourceforge.net/ 00:03:27.719 00:03:27.719 00:03:27.719 Suite: blobfs_bdev_ut 00:03:27.719 Test: spdk_blobfs_bdev_detect_test ...passed 00:03:27.719 Test: spdk_blobfs_bdev_create_test ...passed 00:03:27.719 Test: spdk_blobfs_bdev_mount_test ...passed 00:03:27.719 00:03:27.719 [2024-06-10 10:07:33.177025] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:27.719 [2024-06-10 10:07:33.177309] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:27.719 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.719 suites 1 1 n/a 0 0 00:03:27.719 tests 3 3 3 0 0 00:03:27.719 asserts 9 9 9 0 n/a 00:03:27.719 
00:03:27.719 Elapsed time = 0.000 seconds 00:03:27.719 00:03:27.719 real 0m10.139s 00:03:27.719 user 0m10.087s 00:03:27.719 sys 0m0.184s 00:03:27.719 ************************************ 00:03:27.719 END TEST unittest_blob_blobfs 00:03:27.719 ************************************ 00:03:27.719 10:07:33 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.719 10:07:33 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:03:27.719 10:07:33 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:03:27.719 10:07:33 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.719 10:07:33 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.719 10:07:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.719 ************************************ 00:03:27.719 START TEST unittest_event 00:03:27.719 ************************************ 00:03:27.719 10:07:33 unittest.unittest_event -- common/autotest_common.sh@1124 -- # unittest_event 00:03:27.719 10:07:33 unittest.unittest_event -- unit/unittest.sh@51 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:03:27.719 00:03:27.719 00:03:27.719 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.719 http://cunit.sourceforge.net/ 00:03:27.719 00:03:27.719 00:03:27.719 Suite: app_suite 00:03:27.719 Test: test_spdk_app_parse_args ...app_ut [options] 00:03:27.719 00:03:27.719 CPU options: 00:03:27.719 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:27.719 (like [0,1,10]) 00:03:27.719 --lcores lcore to CPU mapping list. The list is in the format: 00:03:27.719 [<,lcores[@CPUs]>...] 00:03:27.719 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:27.719 Within the group, '-' is used for range separator, 00:03:27.719 ',' is used for single number separator. 00:03:27.719 '( )' can be omitted for single element group, 00:03:27.719 '@' can be omitted if cpus and lcores have the same value 00:03:27.719 --disable-cpumask-locks Disable CPU core lock files. 00:03:27.719 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:27.719 pollers in the app support interrupt mode) 00:03:27.719 -p, --main-core main (primary) core for DPDK 00:03:27.719 00:03:27.719 Configuration options: 00:03:27.719 -c, --config, --json JSON config file 00:03:27.719 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:27.719 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:03:27.719 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:27.719 --rpcs-allowed comma-separated list of permitted RPCS 00:03:27.719 --json-ignore-init-errors don't exit on invalid config entry 00:03:27.719 00:03:27.719 Memory options: 00:03:27.719 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:27.719 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:27.720 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:27.720 -R, --huge-unlink unlink huge files after initialization 00:03:27.720 -n, --mem-channels number of memory channels used for DPDK 00:03:27.720 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:27.720 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:27.720 --no-huge run without using hugepages 00:03:27.720 -i, --shm-id shared memory ID (optional) 00:03:27.720 -g, --single-file-segments force creating just one hugetlbfs file 00:03:27.720 00:03:27.720 PCI options: 00:03:27.720 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:27.720 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:27.720 -u, --no-pci disable PCI access 00:03:27.720 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:27.720 00:03:27.720 Log options: 00:03:27.720 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:27.720 --silence-noticelog disable notice level logging to stderr 00:03:27.720 00:03:27.720 Trace options: 00:03:27.720 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:27.720 setting 0 to disable trace (default 32768) 00:03:27.720 Tracepoints vary in size and can use more than one trace entry. 00:03:27.720 -e, --tpoint-group [:] 00:03:27.720 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:27.720 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:27.720 a tracepoint group. First tpoint inside a group can be enabled by 00:03:27.720 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:27.720 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:27.720 in /include/spdk_internal/trace_defs.h 00:03:27.720 00:03:27.720 Other options: 00:03:27.720 -h, --help show this usage 00:03:27.720 -v, --version print SPDK version 00:03:27.720 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:27.720 --env-context Opaque context for use of the env implementation 00:03:27.720 app_ut [options] 00:03:27.720 00:03:27.720 CPU options: 00:03:27.720 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:27.720 (like [0,1,10]) 00:03:27.720 --lcores lcore to CPU mapping list. The list is in the format: 00:03:27.720 [<,lcores[@CPUs]>...] 00:03:27.720 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:27.720 Within the group, '-' is used for range separator, 00:03:27.720 ',' is used for single number separator. 00:03:27.720 '( )' can be omitted for single element group, 00:03:27.720 '@' can be omitted if cpus and lcores have the same value 00:03:27.720 --disable-cpumask-locks Disable CPU core lock files. 
00:03:27.720 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:27.720 pollers in the app support interrupt mode) 00:03:27.720 -p, --main-core main (primary) core for DPDK 00:03:27.720 00:03:27.720 Configuration options: 00:03:27.720 -c, --config, --json JSON config file 00:03:27.720 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:27.720 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:27.720 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:27.720 --rpcs-allowed comma-separated list of permitted RPCS 00:03:27.720 --json-ignore-init-errors don't exit on invalid config entry 00:03:27.720 00:03:27.720 Memory options: 00:03:27.720 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:27.720 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:27.720 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:27.720 -R, --huge-unlink unlink huge files after initialization 00:03:27.720 -n, --mem-channels number of memory channels used for DPDK 00:03:27.720 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:27.720 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:27.720 --no-huge run without using hugepages 00:03:27.720 -i, --shm-id shared memory ID (optional) 00:03:27.720 -g, --single-file-segments force creating just one hugetlbfs file 00:03:27.720 00:03:27.720 PCI options: 00:03:27.720 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:27.720 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:27.720 -u, --no-pci disable PCI access 00:03:27.720 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:27.720 00:03:27.720 Log options: 00:03:27.720 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:27.720 --silence-noticelog disable notice level logging to stderr 00:03:27.720 00:03:27.720 Trace options: 00:03:27.720 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:27.720 setting 0 to disable trace (default 32768) 00:03:27.720 Tracepoints vary in size and can use more than one trace entry. 00:03:27.720 -e, --tpoint-group [:] 00:03:27.720 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:27.720 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:27.720 a tracepoint group. First tpoint inside a group can be enabled by 00:03:27.720 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:27.720 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:27.720 in /include/spdk_internal/trace_defs.h 00:03:27.720 00:03:27.720 Other options: 00:03:27.720 -h, --help show this usage 00:03:27.720 -v, --version print SPDK version 00:03:27.720 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:27.720 --env-context Opaque context for use of the env implementation 00:03:27.720 app_ut: invalid option -- z 00:03:27.720 app_ut: unrecognized option `--test-long-opt' 00:03:27.720 app_ut [options] 00:03:27.720 00:03:27.720 CPU options: 00:03:27.720 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:27.720 (like [0,1,10]) 00:03:27.720 --lcores lcore to CPU mapping list. The list is in the format: 00:03:27.720 [<,lcores[@CPUs]>...] 
00:03:27.720 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:27.720 Within the group, '-' is used for range separator, 00:03:27.720 ',' is used for single number separator. 00:03:27.720 '( )' can be omitted for single element group, 00:03:27.720 '@' can be omitted if cpus and lcores have the same value 00:03:27.720 --disable-cpumask-locks Disable CPU core lock files. 00:03:27.720 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:27.720 pollers in the app support interrupt mode) 00:03:27.720 -p, --main-core main (primary) core for DPDK 00:03:27.720 00:03:27.720 Configuration options: 00:03:27.720 -c, --config, --json JSON config file 00:03:27.720 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:27.720 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:27.720 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:27.720 --rpcs-allowed comma-separated list of permitted RPCS 00:03:27.720 --json-ignore-init-errors don't exit on invalid config entry 00:03:27.720 00:03:27.720 Memory options: 00:03:27.720 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:27.720 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:27.720 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:27.720 -R, --huge-unlink unlink huge files after initialization 00:03:27.720 -n, --mem-channels number of memory channels used for DPDK 00:03:27.720 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:27.720 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:27.720 --no-huge run without using hugepages 00:03:27.720 -i, --shm-id shared memory ID (optional) 00:03:27.720 -g, --single-file-segments force creating just one hugetlbfs file 00:03:27.720 00:03:27.720 PCI options: 00:03:27.720 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:27.720 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:27.720 -u, --no-pci disable PCI access 00:03:27.720 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:27.720 00:03:27.720 Log options: 00:03:27.720 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:27.720 --silence-noticelog disable notice level logging to stderr 00:03:27.720 00:03:27.720 Trace options: 00:03:27.720 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:27.720 setting 0 to disable trace (default 32768) 00:03:27.720 Tracepoints vary in size and can use more than one trace entry. 00:03:27.720 -e, --tpoint-group [:] 00:03:27.720 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:27.720 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:27.720 a tracepoint group. First tpoint inside a group can be enabled by 00:03:27.720 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:27.720 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:03:27.720 in /include/spdk_internal/trace_defs.h 00:03:27.720 00:03:27.720 Other options: 00:03:27.720 -h, --help show this usage 00:03:27.721 -v, --version print SPDK version 00:03:27.721 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:27.721 --env-context Opaque context for use of the env implementation 00:03:27.721 passed 00:03:27.721 00:03:27.721 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.721 suites 1 1 n/a 0 0 00:03:27.721 tests 1 1 1 0 0 00:03:27.721 asserts 8 8 8 0 n/a 00:03:27.721 00:03:27.721 Elapsed time = 0.000 seconds 00:03:27.721 [2024-06-10 10:07:33.217430] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1193:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:03:27.721 [2024-06-10 10:07:33.217665] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:03:27.721 [2024-06-10 10:07:33.217766] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:03:27.721 10:07:33 unittest.unittest_event -- unit/unittest.sh@52 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:03:27.721 00:03:27.721 00:03:27.721 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.721 http://cunit.sourceforge.net/ 00:03:27.721 00:03:27.721 00:03:27.721 Suite: app_suite 00:03:27.721 Test: test_create_reactor ...passed 00:03:27.721 Test: test_init_reactors ...passed 00:03:27.721 Test: test_event_call ...passed 00:03:27.721 Test: test_schedule_thread ...passed 00:03:27.721 Test: test_reschedule_thread ...passed 00:03:27.721 Test: test_bind_thread ...passed 00:03:27.721 Test: test_for_each_reactor ...passed 00:03:27.721 Test: test_reactor_stats ...passed 00:03:27.721 Test: test_scheduler ...passed 00:03:27.721 Test: test_governor ...passed 00:03:27.721 00:03:27.721 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.721 suites 1 1 n/a 0 0 00:03:27.721 tests 10 10 10 0 0 00:03:27.721 asserts 336 336 336 0 n/a 00:03:27.721 00:03:27.721 Elapsed time = 0.008 seconds 00:03:27.721 00:03:27.721 real 0m0.017s 00:03:27.721 user 0m0.009s 00:03:27.721 sys 0m0.010s 00:03:27.721 10:07:33 unittest.unittest_event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.721 ************************************ 00:03:27.721 END TEST unittest_event 00:03:27.721 ************************************ 00:03:27.721 10:07:33 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:03:27.721 10:07:33 unittest -- unit/unittest.sh@235 -- # uname -s 00:03:27.721 10:07:33 unittest -- unit/unittest.sh@235 -- # '[' FreeBSD = Linux ']' 00:03:27.721 10:07:33 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:27.721 10:07:33 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.721 10:07:33 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.721 10:07:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.721 ************************************ 00:03:27.721 START TEST unittest_accel 00:03:27.721 ************************************ 00:03:27.721 10:07:33 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:27.721 00:03:27.721 00:03:27.721 CUnit - A unit testing 
framework for C - Version 2.1-3 00:03:27.721 http://cunit.sourceforge.net/ 00:03:27.721 00:03:27.721 00:03:27.721 Suite: accel_sequence 00:03:27.721 Test: test_sequence_fill_copy ...passed 00:03:27.721 Test: test_sequence_abort ...passed 00:03:27.721 Test: test_sequence_append_error ...passed 00:03:27.721 Test: test_sequence_completion_error ...[2024-06-10 10:07:33.284806] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1932:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82e866240 00:03:27.721 passed 00:03:27.721 Test: test_sequence_decompress ...[2024-06-10 10:07:33.285077] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1932:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x82e866240 00:03:27.721 [2024-06-10 10:07:33.285100] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1842:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x82e866240 00:03:27.721 [2024-06-10 10:07:33.285120] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1842:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x82e866240 00:03:27.721 passed 00:03:27.721 Test: test_sequence_reverse ...passed 00:03:27.721 Test: test_sequence_copy_elision ...passed 00:03:27.721 Test: test_sequence_accel_buffers ...passed 00:03:27.721 Test: test_sequence_memory_domain ...passed 00:03:27.721 Test: test_sequence_module_memory_domain ...passed 00:03:27.721 Test: test_sequence_crypto ...[2024-06-10 10:07:33.286652] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1734:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:03:27.721 [2024-06-10 10:07:33.286705] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1773:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:03:27.721 passed 00:03:27.721 Test: test_sequence_driver ...passed 00:03:27.721 Test: test_sequence_same_iovs ...[2024-06-10 10:07:33.287540] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1881:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x82e866940 using driver: ut 00:03:27.721 [2024-06-10 10:07:33.287573] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82e866940 through driver: ut 00:03:27.721 passed 00:03:27.721 Test: test_sequence_crc32 ...passed 00:03:27.721 Suite: accel 00:03:27.721 Test: test_spdk_accel_task_complete ...passed 00:03:27.721 Test: test_get_task ...passed 00:03:27.721 Test: test_spdk_accel_submit_copy ...passed 00:03:27.721 Test: test_spdk_accel_submit_dualcast ...passed 00:03:27.721 Test: test_spdk_accel_submit_compare ...passed 00:03:27.721 Test: test_spdk_accel_submit_fill ...passed 00:03:27.721 Test: test_spdk_accel_submit_crc32c ...passed 00:03:27.721 Test: test_spdk_accel_submit_crc32cv ...passed 00:03:27.721 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:03:27.721 Test: test_spdk_accel_submit_xor ...passed 00:03:27.721 Test: test_spdk_accel_module_find_by_name ...passed 00:03:27.721 Test: test_spdk_accel_module_register ...passed 00:03:27.721 00:03:27.721 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.721 suites 2 2 n/a 0 0 00:03:27.721 tests 26 26 26 0 0 00:03:27.721 asserts 830 830 830 0 n/a 00:03:27.721 00:03:27.721 Elapsed time = 0.008 seconds 00:03:27.721 [2024-06-10 10:07:33.288235] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst 
addresses 00:03:27.721 [2024-06-10 10:07:33.288257] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:27.721 00:03:27.721 real 0m0.013s 00:03:27.721 user 0m0.010s 00:03:27.721 sys 0m0.008s 00:03:27.721 10:07:33 unittest.unittest_accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.721 10:07:33 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:03:27.721 ************************************ 00:03:27.721 END TEST unittest_accel 00:03:27.721 ************************************ 00:03:27.981 10:07:33 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:27.981 10:07:33 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.981 10:07:33 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.981 10:07:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.981 ************************************ 00:03:27.981 START TEST unittest_ioat 00:03:27.981 ************************************ 00:03:27.981 10:07:33 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:27.981 00:03:27.981 00:03:27.981 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.981 http://cunit.sourceforge.net/ 00:03:27.981 00:03:27.981 00:03:27.981 Suite: ioat 00:03:27.981 Test: ioat_state_check ...passed 00:03:27.981 00:03:27.981 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.981 suites 1 1 n/a 0 0 00:03:27.981 tests 1 1 1 0 0 00:03:27.981 asserts 32 32 32 0 n/a 00:03:27.981 00:03:27.981 Elapsed time = 0.000 seconds 00:03:27.981 00:03:27.981 real 0m0.004s 00:03:27.981 user 0m0.000s 00:03:27.981 sys 0m0.008s 00:03:27.981 10:07:33 unittest.unittest_ioat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.981 10:07:33 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:03:27.981 ************************************ 00:03:27.981 END TEST unittest_ioat 00:03:27.981 ************************************ 00:03:27.981 10:07:33 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:27.981 10:07:33 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:27.981 10:07:33 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.981 10:07:33 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.981 10:07:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.981 ************************************ 00:03:27.981 START TEST unittest_idxd_user 00:03:27.981 ************************************ 00:03:27.981 10:07:33 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:27.981 00:03:27.981 00:03:27.981 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.981 http://cunit.sourceforge.net/ 00:03:27.981 00:03:27.981 00:03:27.981 Suite: idxd_user 00:03:27.981 Test: test_idxd_wait_cmd ...[2024-06-10 10:07:33.386632] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:27.981 passed 00:03:27.981 Test: test_idxd_reset_dev ...passed 00:03:27.981 Test: test_idxd_group_config ...passed 
00:03:27.981 Test: test_idxd_wq_config ...passed 00:03:27.981 00:03:27.981 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.981 suites 1 1 n/a 0 0 00:03:27.981 tests 4 4 4 0 0 00:03:27.981 asserts 20 20 20 0 n/a 00:03:27.981 00:03:27.981 Elapsed time = 0.000 seconds 00:03:27.981 [2024-06-10 10:07:33.386943] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:03:27.981 [2024-06-10 10:07:33.386980] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:27.981 [2024-06-10 10:07:33.387001] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:03:27.981 00:03:27.981 real 0m0.007s 00:03:27.981 user 0m0.007s 00:03:27.981 sys 0m0.000s 00:03:27.981 ************************************ 00:03:27.981 END TEST unittest_idxd_user 00:03:27.981 ************************************ 00:03:27.981 10:07:33 unittest.unittest_idxd_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.981 10:07:33 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:03:27.981 10:07:33 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:03:27.981 10:07:33 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.981 10:07:33 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.981 10:07:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.981 ************************************ 00:03:27.981 START TEST unittest_iscsi 00:03:27.981 ************************************ 00:03:27.981 10:07:33 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # unittest_iscsi 00:03:27.981 10:07:33 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:03:27.981 00:03:27.981 00:03:27.981 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.981 http://cunit.sourceforge.net/ 00:03:27.981 00:03:27.981 00:03:27.981 Suite: conn_suite 00:03:27.981 Test: read_task_split_in_order_case ...passed 00:03:27.981 Test: read_task_split_reverse_order_case ...passed 00:03:27.981 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:03:27.981 Test: process_non_read_task_completion_test ...passed 00:03:27.981 Test: free_tasks_on_connection ...passed 00:03:27.981 Test: free_tasks_with_queued_datain ...passed 00:03:27.981 Test: abort_queued_datain_task_test ...passed 00:03:27.981 Test: abort_queued_datain_tasks_test ...passed 00:03:27.981 00:03:27.981 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.981 suites 1 1 n/a 0 0 00:03:27.981 tests 8 8 8 0 0 00:03:27.982 asserts 230 230 230 0 n/a 00:03:27.982 00:03:27.982 Elapsed time = 0.000 seconds 00:03:27.982 10:07:33 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:03:27.982 00:03:27.982 00:03:27.982 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.982 http://cunit.sourceforge.net/ 00:03:27.982 00:03:27.982 00:03:27.982 Suite: iscsi_suite 00:03:27.982 Test: param_negotiation_test ...passed 00:03:27.982 Test: list_negotiation_test ...passed 00:03:27.982 Test: parse_valid_test ...passed 00:03:27.982 Test: parse_invalid_test ...passed 00:03:27.982 00:03:27.982 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.982 suites 1 1 n/a 0 0 00:03:27.982 tests 4 4 4 0 0 00:03:27.982 asserts 161 161 161 0 n/a 
00:03:27.982 00:03:27.982 Elapsed time = 0.000 seconds 00:03:27.982 [2024-06-10 10:07:33.437828] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:27.982 [2024-06-10 10:07:33.437989] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:27.982 [2024-06-10 10:07:33.438003] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:03:27.982 [2024-06-10 10:07:33.438025] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:03:27.982 [2024-06-10 10:07:33.438039] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:03:27.982 [2024-06-10 10:07:33.438050] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:03:27.982 [2024-06-10 10:07:33.438060] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:03:27.982 10:07:33 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:03:27.982 00:03:27.982 00:03:27.982 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.982 http://cunit.sourceforge.net/ 00:03:27.982 00:03:27.982 00:03:27.982 Suite: iscsi_target_node_suite 00:03:27.982 Test: add_lun_test_cases ...[2024-06-10 10:07:33.442992] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:03:27.982 passed 00:03:27.982 Test: allow_any_allowed ...passed 00:03:27.982 Test: allow_ipv6_allowed ...passed 00:03:27.982 Test: allow_ipv6_denied ...passed 00:03:27.982 Test: allow_ipv6_invalid ...passed 00:03:27.982 Test: allow_ipv4_allowed ...passed 00:03:27.982 Test: allow_ipv4_denied ...passed 00:03:27.982 Test: allow_ipv4_invalid ...passed 00:03:27.982 Test: node_access_allowed ...passed 00:03:27.982 Test: node_access_denied_by_empty_netmask ...passed 00:03:27.982 Test: node_access_multi_initiator_groups_cases ...passed 00:03:27.982 Test: allow_iscsi_name_multi_maps_case ...passed 00:03:27.982 Test: chap_param_test_cases ...passed 00:03:27.982 00:03:27.982 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.982 suites 1 1 n/a 0 0 00:03:27.982 tests 13 13 13 0 0 00:03:27.982 asserts 50 50 50 0 n/a 00:03:27.982 00:03:27.982 Elapsed time = 0.000 seconds 00:03:27.982 [2024-06-10 10:07:33.443142] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:03:27.982 [2024-06-10 10:07:33.443154] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:27.982 [2024-06-10 10:07:33.443164] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:27.982 [2024-06-10 10:07:33.443172] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:03:27.982 [2024-06-10 10:07:33.443260] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:03:27.982 [2024-06-10 10:07:33.443273] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:03:27.982 [2024-06-10 
10:07:33.443282] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:03:27.982 [2024-06-10 10:07:33.443290] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:03:27.982 [2024-06-10 10:07:33.443298] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:03:27.982 10:07:33 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:03:27.982 00:03:27.982 00:03:27.982 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.982 http://cunit.sourceforge.net/ 00:03:27.982 00:03:27.982 00:03:27.982 Suite: iscsi_suite 00:03:27.982 Test: op_login_check_target_test ...passed 00:03:27.982 Test: op_login_session_normal_test ...[2024-06-10 10:07:33.449931] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:03:27.982 [2024-06-10 10:07:33.450204] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:27.982 [2024-06-10 10:07:33.450227] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:27.982 [2024-06-10 10:07:33.450246] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:27.982 [2024-06-10 10:07:33.450482] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:03:27.982 [2024-06-10 10:07:33.450524] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:27.982 [2024-06-10 10:07:33.450653] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:03:27.982 [2024-06-10 10:07:33.450693] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:27.982 passed 00:03:27.982 Test: maxburstlength_test ...passed 00:03:27.982 Test: underflow_for_read_transfer_test ...passed 00:03:27.982 Test: underflow_for_zero_read_transfer_test ...passed 00:03:27.982 Test: underflow_for_request_sense_test ...passed 00:03:27.982 Test: underflow_for_check_condition_test ...passed 00:03:27.982 Test: add_transfer_task_test ...passed 00:03:27.982 Test: get_transfer_task_test ...passed 00:03:27.982 Test: del_transfer_task_test ...passed 00:03:27.982 Test: clear_all_transfer_tasks_test ...passed 00:03:27.982 Test: build_iovs_test ...[2024-06-10 10:07:33.451189] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:27.982 [2024-06-10 10:07:33.451237] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4557:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:03:27.982 passed 00:03:27.982 Test: build_iovs_with_md_test ...passed 00:03:27.982 Test: pdu_hdr_op_login_test ...passed 00:03:27.982 Test: pdu_hdr_op_text_test ...passed 00:03:27.982 Test: pdu_hdr_op_logout_test ...passed 00:03:27.982 Test: pdu_hdr_op_scsi_test ...[2024-06-10 
10:07:33.451491] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:03:27.982 [2024-06-10 10:07:33.451518] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1259:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:03:27.982 [2024-06-10 10:07:33.451538] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:03:27.982 [2024-06-10 10:07:33.451563] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2247:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:27.982 [2024-06-10 10:07:33.451582] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2278:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:03:27.982 [2024-06-10 10:07:33.451610] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2292:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:03:27.982 [2024-06-10 10:07:33.451633] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2523:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:03:27.982 [2024-06-10 10:07:33.451659] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:27.982 [2024-06-10 10:07:33.451677] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:27.982 [2024-06-10 10:07:33.451694] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3370:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:03:27.982 passed 00:03:27.982 Test: pdu_hdr_op_task_mgmt_test ...[2024-06-10 10:07:33.451714] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:27.982 [2024-06-10 10:07:33.451781] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3411:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:03:27.982 [2024-06-10 10:07:33.451836] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3434:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:03:27.982 [2024-06-10 10:07:33.451897] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3611:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:03:27.982 [2024-06-10 10:07:33.451950] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3700:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:03:27.982 passed 00:03:27.982 Test: pdu_hdr_op_nopout_test ...passed 00:03:27.982 Test: pdu_hdr_op_data_test ...passed 00:03:27.982 Test: empty_text_with_cbit_test ...passed 00:03:27.982 Test: pdu_payload_read_test ...[2024-06-10 10:07:33.452023] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3719:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:03:27.982 [2024-06-10 10:07:33.452044] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:27.982 [2024-06-10 10:07:33.452061] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:27.982 [2024-06-10 10:07:33.452078] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3749:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, 
I=0 00:03:27.982 [2024-06-10 10:07:33.452100] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4192:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:03:27.982 [2024-06-10 10:07:33.452119] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4209:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:03:27.982 [2024-06-10 10:07:33.452137] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:27.983 [2024-06-10 10:07:33.452155] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4223:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:03:27.983 [2024-06-10 10:07:33.452174] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4228:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:03:27.983 [2024-06-10 10:07:33.452192] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4239:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:03:27.983 [2024-06-10 10:07:33.452210] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:03:27.983 [2024-06-10 10:07:33.452825] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4638:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:03:27.983 passed 00:03:27.983 Test: data_out_pdu_sequence_test ...passed 00:03:27.983 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:03:27.983 00:03:27.983 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.983 suites 1 1 n/a 0 0 00:03:27.983 tests 24 24 24 0 0 00:03:27.983 asserts 150253 150253 150253 0 n/a 00:03:27.983 00:03:27.983 Elapsed time = 0.000 seconds 00:03:27.983 10:07:33 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:03:27.983 00:03:27.983 00:03:27.983 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.983 http://cunit.sourceforge.net/ 00:03:27.983 00:03:27.983 00:03:27.983 Suite: init_grp_suite 00:03:27.983 Test: create_initiator_group_success_case ...passed 00:03:27.983 Test: find_initiator_group_success_case ...passed 00:03:27.983 Test: register_initiator_group_twice_case ...passed 00:03:27.983 Test: add_initiator_name_success_case ...passed 00:03:27.983 Test: add_initiator_name_fail_case ...[2024-06-10 10:07:33.462824] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:03:27.983 passed 00:03:27.983 Test: delete_all_initiator_names_success_case ...passed 00:03:27.983 Test: add_netmask_success_case ...passed 00:03:27.983 Test: add_netmask_fail_case ...passed 00:03:27.983 Test: delete_all_netmasks_success_case ...passed 00:03:27.983 Test: initiator_name_overwrite_all_to_any_case ...passed[2024-06-10 10:07:33.462990] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:03:27.983 00:03:27.983 Test: netmask_overwrite_all_to_any_case ...passed 00:03:27.983 Test: add_delete_initiator_names_case ...passed 00:03:27.983 Test: add_duplicated_initiator_names_case ...passed 00:03:27.983 Test: delete_nonexisting_initiator_names_case ...passed 00:03:27.983 Test: add_delete_netmasks_case ...passed 00:03:27.983 Test: add_duplicated_netmasks_case ...passed 00:03:27.983 Test: delete_nonexisting_netmasks_case ...passed 00:03:27.983 00:03:27.983 Run Summary: Type 
Total Ran Passed Failed Inactive 00:03:27.983 suites 1 1 n/a 0 0 00:03:27.983 tests 17 17 17 0 0 00:03:27.983 asserts 108 108 108 0 n/a 00:03:27.983 00:03:27.983 Elapsed time = 0.000 seconds 00:03:27.983 10:07:33 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:03:27.983 00:03:27.983 00:03:27.983 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.983 http://cunit.sourceforge.net/ 00:03:27.983 00:03:27.983 00:03:27.983 Suite: portal_grp_suite 00:03:27.983 Test: portal_create_ipv4_normal_case ...passed 00:03:27.983 Test: portal_create_ipv6_normal_case ...passed 00:03:27.983 Test: portal_create_ipv4_wildcard_case ...passed 00:03:27.983 Test: portal_create_ipv6_wildcard_case ...passed 00:03:27.983 Test: portal_create_twice_case ...[2024-06-10 10:07:33.470067] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:03:27.983 passed 00:03:27.983 Test: portal_grp_register_unregister_case ...passed 00:03:27.983 Test: portal_grp_register_twice_case ...passed 00:03:27.983 Test: portal_grp_add_delete_case ...passed 00:03:27.983 Test: portal_grp_add_delete_twice_case ...passed 00:03:27.983 00:03:27.983 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.983 suites 1 1 n/a 0 0 00:03:27.983 tests 9 9 9 0 0 00:03:27.983 asserts 44 44 44 0 n/a 00:03:27.983 00:03:27.983 Elapsed time = 0.000 seconds 00:03:27.983 00:03:27.983 real 0m0.046s 00:03:27.983 user 0m0.019s 00:03:27.983 sys 0m0.027s 00:03:27.983 ************************************ 00:03:27.983 END TEST unittest_iscsi 00:03:27.983 ************************************ 00:03:27.983 10:07:33 unittest.unittest_iscsi -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.983 10:07:33 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:03:27.983 10:07:33 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:03:27.983 10:07:33 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.983 10:07:33 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.983 10:07:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.983 ************************************ 00:03:27.983 START TEST unittest_json 00:03:27.983 ************************************ 00:03:27.983 10:07:33 unittest.unittest_json -- common/autotest_common.sh@1124 -- # unittest_json 00:03:27.983 10:07:33 unittest.unittest_json -- unit/unittest.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:03:27.983 00:03:27.983 00:03:27.983 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.983 http://cunit.sourceforge.net/ 00:03:27.983 00:03:27.983 00:03:27.983 Suite: json 00:03:27.983 Test: test_parse_literal ...passed 00:03:27.983 Test: test_parse_string_simple ...passed 00:03:27.983 Test: test_parse_string_control_chars ...passed 00:03:27.983 Test: test_parse_string_utf8 ...passed 00:03:27.983 Test: test_parse_string_escapes_twochar ...passed 00:03:27.983 Test: test_parse_string_escapes_unicode ...passed 00:03:27.983 Test: test_parse_number ...passed 00:03:27.983 Test: test_parse_array ...passed 00:03:27.983 Test: test_parse_object ...passed 00:03:27.983 Test: test_parse_nesting ...passed 00:03:27.983 Test: test_parse_comment ...passed 00:03:27.983 00:03:27.983 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.983 suites 1 1 n/a 0 0 00:03:27.983 tests 11 11 11 0 0 
00:03:27.983 asserts 1516 1516 1516 0 n/a 00:03:27.983 00:03:27.983 Elapsed time = 0.000 seconds 00:03:27.983 10:07:33 unittest.unittest_json -- unit/unittest.sh@78 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:03:27.983 00:03:27.983 00:03:27.983 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.983 http://cunit.sourceforge.net/ 00:03:27.983 00:03:27.983 00:03:27.983 Suite: json 00:03:27.983 Test: test_strequal ...passed 00:03:27.983 Test: test_num_to_uint16 ...passed 00:03:27.983 Test: test_num_to_int32 ...passed 00:03:27.983 Test: test_num_to_uint64 ...passed 00:03:27.983 Test: test_decode_object ...passed 00:03:27.983 Test: test_decode_array ...passed 00:03:27.983 Test: test_decode_bool ...passed 00:03:27.983 Test: test_decode_uint16 ...passed 00:03:27.983 Test: test_decode_int32 ...passed 00:03:27.983 Test: test_decode_uint32 ...passed 00:03:27.983 Test: test_decode_uint64 ...passed 00:03:27.983 Test: test_decode_string ...passed 00:03:27.983 Test: test_decode_uuid ...passed 00:03:27.983 Test: test_find ...passed 00:03:27.983 Test: test_find_array ...passed 00:03:27.983 Test: test_iterating ...passed 00:03:27.983 Test: test_free_object ...passed 00:03:27.983 00:03:27.983 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.983 suites 1 1 n/a 0 0 00:03:27.983 tests 17 17 17 0 0 00:03:27.983 asserts 236 236 236 0 n/a 00:03:27.983 00:03:27.983 Elapsed time = 0.000 seconds 00:03:27.983 10:07:33 unittest.unittest_json -- unit/unittest.sh@79 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:03:27.983 00:03:27.983 00:03:27.983 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.983 http://cunit.sourceforge.net/ 00:03:27.983 00:03:27.983 00:03:27.983 Suite: json 00:03:27.983 Test: test_write_literal ...passed 00:03:27.983 Test: test_write_string_simple ...passed 00:03:27.983 Test: test_write_string_escapes ...passed 00:03:27.983 Test: test_write_string_utf16le ...passed 00:03:27.983 Test: test_write_number_int32 ...passed 00:03:27.983 Test: test_write_number_uint32 ...passed 00:03:27.983 Test: test_write_number_uint128 ...passed 00:03:27.983 Test: test_write_string_number_uint128 ...passed 00:03:27.983 Test: test_write_number_int64 ...passed 00:03:27.983 Test: test_write_number_uint64 ...passed 00:03:27.983 Test: test_write_number_double ...passed 00:03:27.983 Test: test_write_uuid ...passed 00:03:27.983 Test: test_write_array ...passed 00:03:27.983 Test: test_write_object ...passed 00:03:27.984 Test: test_write_nesting ...passed 00:03:27.984 Test: test_write_val ...passed 00:03:27.984 00:03:27.984 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.984 suites 1 1 n/a 0 0 00:03:27.984 tests 16 16 16 0 0 00:03:27.984 asserts 918 918 918 0 n/a 00:03:27.984 00:03:27.984 Elapsed time = 0.000 seconds 00:03:27.984 10:07:33 unittest.unittest_json -- unit/unittest.sh@80 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:03:27.984 00:03:27.984 00:03:27.984 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.984 http://cunit.sourceforge.net/ 00:03:27.984 00:03:27.984 00:03:27.984 Suite: jsonrpc 00:03:27.984 Test: test_parse_request ...passed 00:03:27.984 Test: test_parse_request_streaming ...passed 00:03:27.984 00:03:27.984 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.984 suites 1 1 n/a 0 0 00:03:27.984 tests 2 2 2 0 0 00:03:27.984 asserts 289 289 289 0 n/a 00:03:27.984 00:03:27.984 Elapsed time = 0.000 seconds 
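Each of the unittest binaries driven above (json_parse_ut, json_util_ut, json_write_ut, jsonrpc_server_ut, and the rest) is a standalone CUnit executable, which is why every one prints its own "CUnit - A unit testing framework for C - Version 2.1-3" banner followed by its per-suite run summary. Below is a rough sketch of how such a binary is typically put together with the CUnit Basic interface; the suite name, test name, and assertion are illustrative placeholders, not the actual SPDK test sources.

    #include <CUnit/Basic.h>

    /* Placeholder test body: a real unit test would call the SPDK
     * function under test and assert on its return value/output. */
    static void test_parse_literal(void)
    {
            CU_ASSERT(1 + 1 == 2);
    }

    int main(void)
    {
            CU_pSuite suite;

            if (CU_initialize_registry() != CUE_SUCCESS) {
                    return CU_get_error();
            }

            suite = CU_add_suite("json", NULL, NULL);
            if (suite == NULL ||
                CU_add_test(suite, "test_parse_literal", test_parse_literal) == NULL) {
                    CU_cleanup_registry();
                    return CU_get_error();
            }

            /* Verbose mode prints the per-test "passed" lines and the
             * "Run Summary" table seen throughout this log. */
            CU_basic_set_mode(CU_BRM_VERBOSE);
            CU_basic_run_tests();
            CU_cleanup_registry();
            return CU_get_error();
    }
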
00:03:27.984 00:03:27.984 real 0m0.027s 00:03:27.984 user 0m0.017s 00:03:27.984 sys 0m0.011s 00:03:27.984 10:07:33 unittest.unittest_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.984 10:07:33 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:03:27.984 ************************************ 00:03:27.984 END TEST unittest_json 00:03:27.984 ************************************ 00:03:27.984 10:07:33 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:03:27.984 10:07:33 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:27.984 10:07:33 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:27.984 10:07:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.984 ************************************ 00:03:27.984 START TEST unittest_rpc 00:03:27.984 ************************************ 00:03:27.984 10:07:33 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # unittest_rpc 00:03:27.984 10:07:33 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:03:27.984 00:03:27.984 00:03:27.984 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.984 http://cunit.sourceforge.net/ 00:03:27.984 00:03:27.984 00:03:27.984 Suite: rpc 00:03:27.984 Test: test_jsonrpc_handler ...passed 00:03:27.984 Test: test_spdk_rpc_is_method_allowed ...passed 00:03:27.984 Test: test_rpc_get_methods ...[2024-06-10 10:07:33.569478] /usr/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:03:27.984 passed 00:03:27.984 Test: test_rpc_spdk_get_version ...passed 00:03:27.984 Test: test_spdk_rpc_listen_close ...passed 00:03:27.984 Test: test_rpc_run_multiple_servers ...passed 00:03:27.984 00:03:27.984 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.984 suites 1 1 n/a 0 0 00:03:27.984 tests 6 6 6 0 0 00:03:27.984 asserts 23 23 23 0 n/a 00:03:27.984 00:03:27.984 Elapsed time = 0.000 seconds 00:03:27.984 00:03:27.984 real 0m0.006s 00:03:27.984 user 0m0.005s 00:03:27.984 sys 0m0.004s 00:03:27.984 10:07:33 unittest.unittest_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:27.984 10:07:33 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.984 ************************************ 00:03:27.984 END TEST unittest_rpc 00:03:27.984 ************************************ 00:03:28.243 10:07:33 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:28.243 10:07:33 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:28.243 10:07:33 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:28.243 10:07:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:28.243 ************************************ 00:03:28.243 START TEST unittest_notify 00:03:28.243 ************************************ 00:03:28.243 10:07:33 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:28.243 00:03:28.243 00:03:28.243 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.243 http://cunit.sourceforge.net/ 00:03:28.243 00:03:28.243 00:03:28.243 Suite: app_suite 00:03:28.243 Test: notify ...passed 00:03:28.243 00:03:28.243 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.243 suites 1 1 n/a 0 0 00:03:28.243 tests 1 1 1 0 0 00:03:28.243 asserts 13 13 13 0 n/a 
00:03:28.243 00:03:28.243 Elapsed time = 0.000 seconds 00:03:28.243 00:03:28.243 real 0m0.006s 00:03:28.243 user 0m0.005s 00:03:28.243 sys 0m0.000s 00:03:28.243 10:07:33 unittest.unittest_notify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:28.243 10:07:33 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:03:28.243 ************************************ 00:03:28.243 END TEST unittest_notify 00:03:28.244 ************************************ 00:03:28.244 10:07:33 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:03:28.244 10:07:33 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:28.244 10:07:33 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:28.244 10:07:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:28.244 ************************************ 00:03:28.244 START TEST unittest_nvme 00:03:28.244 ************************************ 00:03:28.244 10:07:33 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # unittest_nvme 00:03:28.244 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:03:28.244 00:03:28.244 00:03:28.244 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.244 http://cunit.sourceforge.net/ 00:03:28.244 00:03:28.244 00:03:28.244 Suite: nvme 00:03:28.244 Test: test_opc_data_transfer ...passed 00:03:28.244 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:03:28.244 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:03:28.244 Test: test_trid_parse_and_compare ...[2024-06-10 10:07:33.654649] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1176:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:03:28.244 [2024-06-10 10:07:33.654936] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:28.244 [2024-06-10 10:07:33.654960] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1189:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:03:28.244 [2024-06-10 10:07:33.654977] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:28.244 [2024-06-10 10:07:33.654992] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without value 00:03:28.244 [2024-06-10 10:07:33.655007] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:28.244 passed 00:03:28.244 Test: test_trid_trtype_str ...passed 00:03:28.244 Test: test_trid_adrfam_str ...passed 00:03:28.244 Test: test_nvme_ctrlr_probe ...passed[2024-06-10 10:07:33.655166] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:28.244 00:03:28.244 Test: test_spdk_nvme_probe ...[2024-06-10 10:07:33.655629] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:28.244 [2024-06-10 10:07:33.655644] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:28.244 [2024-06-10 10:07:33.655656] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 813:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:03:28.244 [2024-06-10 10:07:33.655665] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 
00:03:28.244 passed 00:03:28.244 Test: test_spdk_nvme_connect ...passed 00:03:28.244 Test: test_nvme_ctrlr_probe_internal ...[2024-06-10 10:07:33.655695] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 994:spdk_nvme_connect: *ERROR*: No transport ID specified 00:03:28.244 [2024-06-10 10:07:33.655746] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:28.244 [2024-06-10 10:07:33.655755] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1005:spdk_nvme_connect: *ERROR*: Create probe context failed 00:03:28.244 passed 00:03:28.244 Test: test_nvme_init_controllers ...passed 00:03:28.244 Test: test_nvme_driver_init ...[2024-06-10 10:07:33.655918] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:28.244 [2024-06-10 10:07:33.655949] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:03:28.244 [2024-06-10 10:07:33.655972] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:03:28.244 [2024-06-10 10:07:33.655992] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:03:28.244 [2024-06-10 10:07:33.656003] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:28.244 [2024-06-10 10:07:33.766538] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:03:28.244 passed 00:03:28.244 Test: test_spdk_nvme_detach ...passed 00:03:28.244 Test: test_nvme_completion_poll_cb ...passed 00:03:28.244 Test: test_nvme_user_copy_cmd_complete ...passed 00:03:28.244 Test: test_nvme_allocate_request_null ...passed 00:03:28.244 Test: test_nvme_allocate_request ...passed 00:03:28.244 Test: test_nvme_free_request ...passed 00:03:28.244 Test: test_nvme_allocate_request_user_copy ...passed 00:03:28.244 Test: test_nvme_robust_mutex_init_shared ...passed 00:03:28.244 Test: test_nvme_request_check_timeout ...passed 00:03:28.244 Test: test_nvme_wait_for_completion ...passed 00:03:28.244 Test: test_spdk_nvme_parse_func ...passed 00:03:28.244 Test: test_spdk_nvme_detach_async ...passed 00:03:28.244 Test: test_nvme_parse_addr ...passed 00:03:28.244 00:03:28.244 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.244 suites 1 1 n/a 0 0 00:03:28.244 tests 25 25 25 0 0 00:03:28.244 asserts 326 326 326 0 n/a 00:03:28.244 00:03:28.244 Elapsed time = 0.000 seconds 00:03:28.244 [2024-06-10 10:07:33.766833] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1586:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:03:28.244 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:03:28.244 00:03:28.244 00:03:28.244 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.244 http://cunit.sourceforge.net/ 00:03:28.244 00:03:28.244 00:03:28.244 Suite: nvme_ctrlr 00:03:28.244 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-06-10 10:07:33.775088] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 passed 00:03:28.244 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-06-10 10:07:33.776639] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 passed 00:03:28.244 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-06-10 10:07:33.777858] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 passed 00:03:28.244 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-06-10 10:07:33.779057] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 passed 00:03:28.244 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-06-10 10:07:33.780472] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 [2024-06-10 10:07:33.781653] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-10 10:07:33.782809] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-10 10:07:33.783962] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:28.244 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-06-10 10:07:33.786267] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 [2024-06-10 10:07:33.788530] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-10 10:07:33.789671] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:28.244 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-06-10 10:07:33.791976] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 [2024-06-10 10:07:33.793126] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-06-10 10:07:33.795391] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3947:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:28.244 Test: test_nvme_ctrlr_init_delay ...[2024-06-10 10:07:33.797693] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 passed 00:03:28.244 Test: test_alloc_io_qpair_rr_1 ...[2024-06-10 10:07:33.798874] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 [2024-06-10 10:07:33.798921] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:28.244 passed 00:03:28.244 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:03:28.244 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:03:28.244 Test: 
test_alloc_io_qpair_wrr_1 ...[2024-06-10 10:07:33.798948] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:28.244 [2024-06-10 10:07:33.798967] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:28.244 [2024-06-10 10:07:33.798985] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:28.244 [2024-06-10 10:07:33.799072] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 passed 00:03:28.244 Test: test_alloc_io_qpair_wrr_2 ...passed 00:03:28.244 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-06-10 10:07:33.799114] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.244 [2024-06-10 10:07:33.799141] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:28.244 [2024-06-10 10:07:33.799187] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4858:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:03:28.244 [2024-06-10 10:07:33.799206] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:28.244 [2024-06-10 10:07:33.799225] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4935:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:03:28.244 passed 00:03:28.244 Test: test_nvme_ctrlr_fail ...passed 00:03:28.244 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:03:28.244 Test: test_nvme_ctrlr_set_supported_features ...passed 00:03:28.244 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:03:28.245 Test: test_nvme_ctrlr_test_active_ns ...[2024-06-10 10:07:33.799242] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4895:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:28.245 [2024-06-10 10:07:33.799266] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:03:28.245 [2024-06-10 10:07:33.799342] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.245 passed 00:03:28.245 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:03:28.245 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:03:28.245 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:03:28.245 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-06-10 10:07:33.836464] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.503 passed 00:03:28.503 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-06-10 10:07:33.843091] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.503 passed 00:03:28.503 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-06-10 10:07:33.844215] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 [2024-06-10 10:07:33.844232] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2884:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:03:28.504 passed 00:03:28.504 Test: test_alloc_io_qpair_fail ...passed 00:03:28.504 Test: test_nvme_ctrlr_add_remove_process ...passed 00:03:28.504 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:03:28.504 Test: test_nvme_ctrlr_set_state ...[2024-06-10 10:07:33.845342] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 [2024-06-10 10:07:33.845358] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 511:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:03:28.504 [2024-06-10 10:07:33.845382] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1479:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:03:28.504 passed 00:03:28.504 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-06-10 10:07:33.845394] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 passed 00:03:28.504 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-06-10 10:07:33.848467] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 passed 00:03:28.504 Test: test_nvme_ctrlr_ns_mgmt ...[2024-06-10 10:07:33.855067] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 passed 00:03:28.504 Test: test_nvme_ctrlr_reset ...[2024-06-10 10:07:33.856227] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 passed 00:03:28.504 Test: test_nvme_ctrlr_aer_callback ...[2024-06-10 10:07:33.856291] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 passed 00:03:28.504 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-06-10 10:07:33.857446] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 passed 00:03:28.504 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:03:28.504 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:03:28.504 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-06-10 10:07:33.858684] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 passed 00:03:28.504 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:03:28.504 Test: test_nvme_ctrlr_ana_resize ...[2024-06-10 10:07:33.859850] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 passed 00:03:28.504 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:03:28.504 Test: test_nvme_transport_ctrlr_ready ...[2024-06-10 10:07:33.861037] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4029:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:03:28.504 passed 00:03:28.504 Test: test_nvme_ctrlr_disable ...[2024-06-10 10:07:33.861064] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4081:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:03:28.504 [2024-06-10 10:07:33.861079] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4150:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:28.504 passed 00:03:28.504 00:03:28.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.504 suites 1 1 n/a 0 0 00:03:28.504 tests 43 43 43 0 0 00:03:28.504 asserts 10418 10418 10418 0 n/a 00:03:28.504 00:03:28.504 Elapsed time = 0.039 seconds 00:03:28.504 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@90 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:03:28.504 00:03:28.504 00:03:28.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.504 http://cunit.sourceforge.net/ 00:03:28.504 00:03:28.504 00:03:28.504 Suite: nvme_ctrlr_cmd 00:03:28.504 Test: test_get_log_pages ...passed 00:03:28.504 Test: test_set_feature_cmd ...passed 00:03:28.504 Test: test_set_feature_ns_cmd ...passed 00:03:28.504 Test: test_get_feature_cmd ...passed 00:03:28.504 Test: test_get_feature_ns_cmd ...passed 00:03:28.504 Test: test_abort_cmd ...passed 00:03:28.504 Test: test_set_host_id_cmds ...passed 00:03:28.504 Test: test_io_cmd_raw_no_payload_build ...passed 00:03:28.504 Test: test_io_raw_cmd ...passed 00:03:28.504 Test: test_io_raw_cmd_with_md ...passed 00:03:28.504 Test: test_namespace_attach ...passed 00:03:28.504 Test: test_namespace_detach ...passed 00:03:28.504 Test: test_namespace_create ...passed 00:03:28.504 Test: test_namespace_delete ...passed 00:03:28.504 Test: test_doorbell_buffer_config ...passed 00:03:28.504 Test: test_format_nvme ...passed 00:03:28.504 Test: test_fw_commit ...passed 00:03:28.504 Test: test_fw_image_download ...passed 00:03:28.504 Test: test_sanitize ...passed 00:03:28.504 Test: test_directive ...passed 00:03:28.504 Test: test_nvme_request_add_abort ...passed 00:03:28.504 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:03:28.504 Test: test_nvme_ctrlr_cmd_identify ...passed 00:03:28.504 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:03:28.504 00:03:28.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.504 suites 1 1 n/a 0 0 00:03:28.504 tests 24 24 24 0 0 00:03:28.504 asserts 198 198 198 0 n/a 00:03:28.504 00:03:28.504 Elapsed time = 0.000 seconds 00:03:28.504 [2024-06-10 10:07:33.870409] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:03:28.504 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:03:28.504 00:03:28.504 00:03:28.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.504 http://cunit.sourceforge.net/ 00:03:28.504 00:03:28.504 00:03:28.504 Suite: nvme_ctrlr_cmd 00:03:28.504 Test: test_geometry_cmd ...passed 00:03:28.504 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:03:28.504 00:03:28.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.504 suites 1 1 n/a 0 0 00:03:28.504 tests 2 2 2 0 0 00:03:28.504 asserts 7 7 7 0 n/a 00:03:28.504 00:03:28.504 Elapsed time = 0.000 seconds 00:03:28.504 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:03:28.504 00:03:28.504 00:03:28.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.504 http://cunit.sourceforge.net/ 00:03:28.504 00:03:28.504 00:03:28.504 Suite: nvme 00:03:28.504 Test: test_nvme_ns_construct ...passed 00:03:28.504 Test: test_nvme_ns_uuid ...passed 00:03:28.504 Test: test_nvme_ns_csi ...passed 00:03:28.504 Test: test_nvme_ns_data ...passed 00:03:28.504 Test: test_nvme_ns_set_identify_data ...passed 00:03:28.504 Test: test_spdk_nvme_ns_get_values ...passed 00:03:28.504 Test: test_spdk_nvme_ns_is_active ...passed 00:03:28.504 Test: spdk_nvme_ns_supports ...passed 00:03:28.504 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:03:28.504 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 
00:03:28.504 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:03:28.504 Test: test_nvme_ns_find_id_desc ...passed 00:03:28.504 00:03:28.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.504 suites 1 1 n/a 0 0 00:03:28.504 tests 12 12 12 0 0 00:03:28.504 asserts 83 83 83 0 n/a 00:03:28.504 00:03:28.504 Elapsed time = 0.000 seconds 00:03:28.504 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:03:28.504 00:03:28.504 00:03:28.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.504 http://cunit.sourceforge.net/ 00:03:28.504 00:03:28.504 00:03:28.504 Suite: nvme_ns_cmd 00:03:28.504 Test: split_test ...passed 00:03:28.504 Test: split_test2 ...passed 00:03:28.504 Test: split_test3 ...passed 00:03:28.504 Test: split_test4 ...passed 00:03:28.504 Test: test_nvme_ns_cmd_flush ...passed 00:03:28.504 Test: test_nvme_ns_cmd_dataset_management ...passed 00:03:28.504 Test: test_nvme_ns_cmd_copy ...passed 00:03:28.504 Test: test_io_flags ...[2024-06-10 10:07:33.890773] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:03:28.504 passed 00:03:28.504 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:03:28.504 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:03:28.504 Test: test_nvme_ns_cmd_reservation_register ...passed 00:03:28.504 Test: test_nvme_ns_cmd_reservation_release ...passed 00:03:28.504 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:03:28.504 Test: test_nvme_ns_cmd_reservation_report ...passed 00:03:28.504 Test: test_cmd_child_request ...passed 00:03:28.504 Test: test_nvme_ns_cmd_readv ...passed 00:03:28.504 Test: test_nvme_ns_cmd_read_with_md ...passed 00:03:28.504 Test: test_nvme_ns_cmd_writev ...passed 00:03:28.504 Test: test_nvme_ns_cmd_write_with_md ...passed 00:03:28.504 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:03:28.504 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:03:28.504 Test: test_nvme_ns_cmd_comparev ...passed 00:03:28.504 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:03:28.504 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:03:28.504 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:03:28.504 Test: test_nvme_ns_cmd_setup_request ...passed 00:03:28.504 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:03:28.504 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:03:28.505 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:03:28.505 Test: test_nvme_ns_cmd_verify ...passed 00:03:28.505 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:03:28.505 Test: test_nvme_ns_cmd_io_mgmt_recv ...[2024-06-10 10:07:33.891081] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:03:28.505 [2024-06-10 10:07:33.891224] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:28.505 [2024-06-10 10:07:33.891248] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:28.505 passed 00:03:28.505 00:03:28.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.505 suites 1 1 n/a 0 0 00:03:28.505 tests 32 32 32 0 0 00:03:28.505 asserts 550 550 550 0 n/a 00:03:28.505 00:03:28.505 Elapsed time = 0.000 seconds 00:03:28.505 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@94 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:03:28.505 00:03:28.505 00:03:28.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.505 http://cunit.sourceforge.net/ 00:03:28.505 00:03:28.505 00:03:28.505 Suite: nvme_ns_cmd 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:03:28.505 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:03:28.505 00:03:28.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.505 suites 1 1 n/a 0 0 00:03:28.505 tests 12 12 12 0 0 00:03:28.505 asserts 123 123 123 0 n/a 00:03:28.505 00:03:28.505 Elapsed time = 0.000 seconds 00:03:28.505 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:03:28.505 00:03:28.505 00:03:28.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.505 http://cunit.sourceforge.net/ 00:03:28.505 00:03:28.505 00:03:28.505 Suite: nvme_qpair 00:03:28.505 Test: test3 ...passed 00:03:28.505 Test: test_ctrlr_failed ...passed 00:03:28.505 Test: struct_packing ...passed 00:03:28.505 Test: test_nvme_qpair_process_completions ...[2024-06-10 10:07:33.907270] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:28.505 [2024-06-10 10:07:33.908266] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:28.505 [2024-06-10 10:07:33.908355] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:03:28.505 passed 00:03:28.505 Test: test_nvme_completion_is_retry ...passed 00:03:28.505 Test: test_get_status_string ...passed 00:03:28.505 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:03:28.505 Test: test_nvme_qpair_submit_request ...passed 00:03:28.505 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:03:28.505 Test: test_nvme_qpair_manual_complete_request ...passed 00:03:28.505 Test: test_nvme_qpair_init_deinit ...[2024-06-10 10:07:33.908710] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:03:28.505 [2024-06-10 10:07:33.908839] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:28.505 passed 00:03:28.505 Test: test_nvme_get_sgl_print_info ...passed 00:03:28.505 00:03:28.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.505 suites 1 1 n/a 0 0 00:03:28.505 tests 12 12 12 0 0 00:03:28.505 asserts 154 154 154 0 n/a 00:03:28.505 
00:03:28.505 Elapsed time = 0.000 seconds 00:03:28.505 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:03:28.505 00:03:28.505 00:03:28.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.505 http://cunit.sourceforge.net/ 00:03:28.505 00:03:28.505 00:03:28.505 Suite: nvme_pcie 00:03:28.505 Test: test_prp_list_append ...[2024-06-10 10:07:33.913199] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:28.505 [2024-06-10 10:07:33.913382] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:03:28.505 [2024-06-10 10:07:33.913405] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:03:28.505 [2024-06-10 10:07:33.913458] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:28.505 [2024-06-10 10:07:33.913490] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:28.505 passed 00:03:28.505 Test: test_nvme_pcie_hotplug_monitor ...passed 00:03:28.505 Test: test_shadow_doorbell_update ...passed 00:03:28.505 Test: test_build_contig_hw_sgl_request ...passed 00:03:28.505 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:03:28.505 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:03:28.505 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:03:28.505 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:03:28.505 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:03:28.505 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:03:28.505 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:03:28.505 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:03:28.505 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:03:28.505 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-06-10 10:07:33.913586] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:28.505 [2024-06-10 10:07:33.913627] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:03:28.505 [2024-06-10 10:07:33.913645] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:03:28.505 [2024-06-10 10:07:33.913671] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:03:28.505 [2024-06-10 10:07:33.913693] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:03:28.505 passed 00:03:28.505 00:03:28.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.505 suites 1 1 n/a 0 0 00:03:28.505 tests 14 14 14 0 0 00:03:28.505 asserts 235 235 235 0 n/a 00:03:28.505 00:03:28.505 Elapsed time = 0.000 seconds 00:03:28.505 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:03:28.505 00:03:28.505 00:03:28.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.505 http://cunit.sourceforge.net/ 00:03:28.505 00:03:28.505 00:03:28.505 Suite: nvme_ns_cmd 00:03:28.505 Test: nvme_poll_group_create_test ...passed 00:03:28.505 Test: nvme_poll_group_add_remove_test ...passed 00:03:28.505 Test: nvme_poll_group_process_completions ...passed 00:03:28.505 Test: nvme_poll_group_destroy_test ...passed 00:03:28.505 Test: nvme_poll_group_get_free_stats ...passed 00:03:28.505 00:03:28.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.505 suites 1 1 n/a 0 0 00:03:28.505 tests 5 5 5 0 0 00:03:28.505 asserts 75 75 75 0 n/a 00:03:28.505 00:03:28.505 Elapsed time = 0.000 seconds 00:03:28.505 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:03:28.505 00:03:28.505 00:03:28.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.505 http://cunit.sourceforge.net/ 00:03:28.505 00:03:28.505 00:03:28.505 Suite: nvme_quirks 00:03:28.505 Test: test_nvme_quirks_striping ...passed 00:03:28.505 00:03:28.505 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.505 suites 1 1 n/a 0 0 00:03:28.505 tests 1 1 1 0 0 00:03:28.505 asserts 5 5 5 0 n/a 00:03:28.505 00:03:28.505 Elapsed time = 0.000 seconds 00:03:28.505 10:07:33 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:03:28.505 00:03:28.505 00:03:28.505 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.505 http://cunit.sourceforge.net/ 00:03:28.505 00:03:28.505 00:03:28.505 Suite: nvme_tcp 00:03:28.505 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:03:28.505 Test: test_nvme_tcp_build_iovs ...passed 00:03:28.505 Test: test_nvme_tcp_build_sgl_request ...passed 00:03:28.505 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...[2024-06-10 10:07:33.928820] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 826:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x8205bca98, and the iovcnt=16, remaining_size=28672 00:03:28.505 passed 00:03:28.505 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:03:28.506 Test: test_nvme_tcp_req_complete_safe ...passed 00:03:28.506 Test: test_nvme_tcp_req_get ...passed 00:03:28.506 Test: test_nvme_tcp_req_init ...passed 00:03:28.506 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:03:28.506 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:03:28.506 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:03:28.506 Test: test_nvme_tcp_alloc_reqs ...passed 
00:03:28.506 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-06-10 10:07:33.929119] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(6) to be set 00:03:28.506 passed 00:03:28.506 Test: test_nvme_tcp_pdu_ch_handle ...[2024-06-10 10:07:33.929172] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.929199] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x8205bddb8 00:03:28.506 [2024-06-10 10:07:33.929218] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1227:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:03:28.506 [2024-06-10 10:07:33.929238] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.929257] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1177:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:03:28.506 [2024-06-10 10:07:33.929278] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.929300] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:03:28.506 [2024-06-10 10:07:33.929318] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.929341] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.929361] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.929382] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.929404] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.929423] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 passed 00:03:28.506 Test: test_nvme_tcp_qpair_connect_sock ...[2024-06-10 10:07:33.929463] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:03:28.506 [2024-06-10 10:07:33.929482] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:28.506 [2024-06-10 10:07:33.969984] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:28.506 passed 00:03:28.506 Test: 
test_nvme_tcp_qpair_icreq_send ...passed 00:03:28.506 Test: test_nvme_tcp_c2h_payload_handle ...[2024-06-10 10:07:33.970081] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1342:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8205be1f0): PDU Sequence Error 00:03:28.506 passed 00:03:28.506 Test: test_nvme_tcp_icresp_handle ...[2024-06-10 10:07:33.970112] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1567:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:03:28.506 [2024-06-10 10:07:33.970133] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1575:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:03:28.506 [2024-06-10 10:07:33.970153] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.970172] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:03:28.506 [2024-06-10 10:07:33.970192] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.970225] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205be628 is same with the state(0) to be set 00:03:28.506 passed 00:03:28.506 Test: test_nvme_tcp_pdu_payload_handle ...[2024-06-10 10:07:33.970264] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1342:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8205be1f0): PDU Sequence Error 00:03:28.506 passed 00:03:28.506 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-06-10 10:07:33.970298] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1644:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x8205be628 00:03:28.506 passed 00:03:28.506 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:03:28.506 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-06-10 10:07:33.970380] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 354:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x8205bc388, errno=0, rc=0 00:03:28.506 [2024-06-10 10:07:33.970401] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205bc388 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.970429] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 324:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205bc388 is same with the state(5) to be set 00:03:28.506 [2024-06-10 10:07:33.970517] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2177:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8205bc388 (0): No error: 0 00:03:28.506 [2024-06-10 10:07:33.970550] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2177:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8205bc388 (0): No error: 0 00:03:28.506 passed 00:03:28.506 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-06-10 10:07:34.040162] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:28.506 [2024-06-10 10:07:34.040215] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. 
Minimum queue size is 2. 00:03:28.506 passed 00:03:28.506 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:03:28.506 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:03:28.506 Test: test_nvme_tcp_ctrlr_construct ...[2024-06-10 10:07:34.040261] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:28.506 [2024-06-10 10:07:34.040274] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:28.506 passed 00:03:28.506 Test: test_nvme_tcp_qpair_submit_request ...passed 00:03:28.506 00:03:28.506 [2024-06-10 10:07:34.040319] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2508:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:28.506 [2024-06-10 10:07:34.040328] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:28.506 [2024-06-10 10:07:34.040342] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:03:28.506 [2024-06-10 10:07:34.040351] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:28.506 [2024-06-10 10:07:34.040366] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2375:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82e4df000 with addr=192.168.1.78, port=23 00:03:28.506 [2024-06-10 10:07:34.040375] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:28.506 [2024-06-10 10:07:34.040394] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 826:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x82e4b2180, and the iovcnt=1, remaining_size=1024 00:03:28.506 [2024-06-10 10:07:34.040403] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1018:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:03:28.506 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.506 suites 1 1 n/a 0 0 00:03:28.506 tests 27 27 27 0 0 00:03:28.506 asserts 624 624 624 0 n/a 00:03:28.506 00:03:28.506 Elapsed time = 0.062 seconds 00:03:28.506 10:07:34 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:03:28.506 00:03:28.506 00:03:28.506 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.506 http://cunit.sourceforge.net/ 00:03:28.506 00:03:28.506 00:03:28.506 Suite: nvme_transport 00:03:28.506 Test: test_nvme_get_transport ...passed 00:03:28.506 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:03:28.506 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:03:28.506 Test: test_nvme_transport_poll_group_add_remove ...passed 00:03:28.506 Test: test_ctrlr_get_memory_domains ...passed 00:03:28.506 00:03:28.506 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.506 suites 1 1 n/a 0 0 00:03:28.506 tests 5 5 5 0 0 00:03:28.506 asserts 28 28 28 0 n/a 00:03:28.506 00:03:28.506 Elapsed time = 0.000 seconds 00:03:28.506 10:07:34 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:03:28.506 00:03:28.506 00:03:28.506 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.506 http://cunit.sourceforge.net/ 00:03:28.506 
00:03:28.506 00:03:28.506 Suite: nvme_io_msg 00:03:28.506 Test: test_nvme_io_msg_send ...passed 00:03:28.506 Test: test_nvme_io_msg_process ...passed 00:03:28.507 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:03:28.507 00:03:28.507 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.507 suites 1 1 n/a 0 0 00:03:28.507 tests 3 3 3 0 0 00:03:28.507 asserts 56 56 56 0 n/a 00:03:28.507 00:03:28.507 Elapsed time = 0.000 seconds 00:03:28.507 10:07:34 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:03:28.507 00:03:28.507 00:03:28.507 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.507 http://cunit.sourceforge.net/ 00:03:28.507 00:03:28.507 00:03:28.507 Suite: nvme_pcie_common 00:03:28.507 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-06-10 10:07:34.065626] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:03:28.507 passed 00:03:28.507 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:03:28.507 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:03:28.507 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:03:28.507 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-06-10 10:07:34.066003] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:03:28.507 [2024-06-10 10:07:34.066037] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:03:28.507 [2024-06-10 10:07:34.066057] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:03:28.507 passed 00:03:28.507 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:03:28.507 00:03:28.507 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.507 suites 1 1 n/a 0 0 00:03:28.507 tests 6 6 6 0 0 00:03:28.507 asserts 148 148 148 0 n/a 00:03:28.507 00:03:28.507 Elapsed time = 0.000 seconds 00:03:28.507 [2024-06-10 10:07:34.066229] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:28.507 [2024-06-10 10:07:34.066247] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:28.507 10:07:34 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:03:28.507 00:03:28.507 00:03:28.507 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.507 http://cunit.sourceforge.net/ 00:03:28.507 00:03:28.507 00:03:28.507 Suite: nvme_fabric 00:03:28.507 Test: test_nvme_fabric_prop_set_cmd ...passed 00:03:28.507 Test: test_nvme_fabric_prop_get_cmd ...passed 00:03:28.507 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:03:28.507 Test: test_nvme_fabric_discover_probe ...passed 00:03:28.507 Test: test_nvme_fabric_qpair_connect ...passed 00:03:28.507 00:03:28.507 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.507 suites 1 1 n/a 0 0 00:03:28.507 tests 5 5 5 0 0 00:03:28.507 asserts 60 60 60 0 n/a 00:03:28.507 00:03:28.507 Elapsed time = 0.000 seconds 00:03:28.507 [2024-06-10 10:07:34.071926] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 
607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:03:28.507 10:07:34 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:03:28.507 00:03:28.507 00:03:28.507 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.507 http://cunit.sourceforge.net/ 00:03:28.507 00:03:28.507 00:03:28.507 Suite: nvme_opal 00:03:28.507 Test: test_opal_nvme_security_recv_send_done ...passed 00:03:28.507 Test: test_opal_add_short_atom_header ...passed 00:03:28.507 00:03:28.507 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.507 suites 1 1 n/a 0 0 00:03:28.507 tests 2 2 2 0 0 00:03:28.507 asserts 22 22 22 0 n/a 00:03:28.507 00:03:28.507 Elapsed time = 0.000 seconds 00:03:28.507 [2024-06-10 10:07:34.077748] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:03:28.507 00:03:28.507 real 0m0.430s 00:03:28.507 user 0m0.049s 00:03:28.507 sys 0m0.192s 00:03:28.507 ************************************ 00:03:28.507 END TEST unittest_nvme 00:03:28.507 ************************************ 00:03:28.507 10:07:34 unittest.unittest_nvme -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:28.507 10:07:34 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:03:28.766 10:07:34 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:28.766 10:07:34 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:28.766 10:07:34 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:28.766 10:07:34 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:28.766 ************************************ 00:03:28.766 START TEST unittest_log 00:03:28.766 ************************************ 00:03:28.766 10:07:34 unittest.unittest_log -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:28.766 00:03:28.766 00:03:28.766 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.766 http://cunit.sourceforge.net/ 00:03:28.766 00:03:28.766 00:03:28.766 Suite: log 00:03:28.766 Test: log_test ...[2024-06-10 10:07:34.121507] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:03:28.766 [2024-06-10 10:07:34.121694] log_ut.c: 57:log_test: *DEBUG*: log test 00:03:28.766 log dump test: 00:03:28.766 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:03:28.766 spdk dump test: 00:03:28.766 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:03:28.766 spdk dump test: 00:03:28.766 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:03:28.766 passed 00:03:28.766 Test: deprecation ...00000010 65 20 63 68 61 72 73 e chars 00:03:29.700 passed 00:03:29.701 00:03:29.701 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.701 suites 1 1 n/a 0 0 00:03:29.701 tests 2 2 2 0 0 00:03:29.701 asserts 73 73 73 0 n/a 00:03:29.701 00:03:29.701 Elapsed time = 0.000 seconds 00:03:29.701 00:03:29.701 real 0m1.076s 00:03:29.701 user 0m0.005s 00:03:29.701 sys 0m0.004s 00:03:29.701 10:07:35 unittest.unittest_log -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:29.701 10:07:35 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:03:29.701 ************************************ 00:03:29.701 END TEST unittest_log 00:03:29.701 
************************************ 00:03:29.701 10:07:35 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:29.701 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:29.701 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:29.701 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:29.701 ************************************ 00:03:29.701 START TEST unittest_lvol 00:03:29.701 ************************************ 00:03:29.701 10:07:35 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:29.701 00:03:29.701 00:03:29.701 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.701 http://cunit.sourceforge.net/ 00:03:29.701 00:03:29.701 00:03:29.701 Suite: lvol 00:03:29.701 Test: lvs_init_unload_success ...passed 00:03:29.701 Test: lvs_init_destroy_success ...[2024-06-10 10:07:35.245458] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:03:29.701 [2024-06-10 10:07:35.245711] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:03:29.701 passed 00:03:29.701 Test: lvs_init_opts_success ...passed 00:03:29.701 Test: lvs_unload_lvs_is_null_fail ...passed 00:03:29.701 Test: lvs_names ...[2024-06-10 10:07:35.245746] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:03:29.701 passed 00:03:29.701 Test: lvol_create_destroy_success ...passed 00:03:29.701 Test: lvol_create_fail ...passed 00:03:29.701 Test: lvol_destroy_fail ...passed 00:03:29.701 Test: lvol_close ...passed 00:03:29.701 Test: lvol_resize ...passed 00:03:29.701 Test: lvol_set_read_only ...passed 00:03:29.701 Test: test_lvs_load ...[2024-06-10 10:07:35.245765] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:03:29.701 [2024-06-10 10:07:35.245779] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:03:29.701 [2024-06-10 10:07:35.245800] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:03:29.701 [2024-06-10 10:07:35.245866] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:03:29.701 [2024-06-10 10:07:35.245883] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:03:29.701 [2024-06-10 10:07:35.245917] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:03:29.701 [2024-06-10 10:07:35.245950] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:03:29.701 [2024-06-10 10:07:35.245963] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:03:29.701 passed 00:03:29.701 Test: lvols_load ...passed 00:03:29.701 Test: lvol_open ...passed 00:03:29.701 Test: lvol_snapshot ...passed 00:03:29.701 Test: lvol_snapshot_fail ...passed 00:03:29.701 Test: lvol_clone ...[2024-06-10 10:07:35.246026] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:03:29.701 [2024-06-10 10:07:35.246051] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:03:29.701 [2024-06-10 10:07:35.246080] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:29.701 [2024-06-10 10:07:35.246112] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:29.701 [2024-06-10 10:07:35.246208] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:03:29.701 passed 00:03:29.701 Test: lvol_clone_fail ...passed 00:03:29.701 Test: lvol_iter_clones ...passed 00:03:29.701 Test: lvol_refcnt ...passed 00:03:29.701 Test: lvol_names ...passed 00:03:29.701 Test: lvol_create_thin_provisioned ...passed 00:03:29.701 Test: lvol_rename ...[2024-06-10 10:07:35.246263] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:03:29.701 [2024-06-10 10:07:35.246312] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 425362ee-2711-11ef-b084-113036b5c18d because it is still open 00:03:29.701 [2024-06-10 10:07:35.246340] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:03:29.701 [2024-06-10 10:07:35.246358] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:29.701 [2024-06-10 10:07:35.246381] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:03:29.701 passed 00:03:29.701 Test: lvs_rename ...passed 00:03:29.701 Test: lvol_inflate ...passed 00:03:29.701 Test: lvol_decouple_parent ...passed 00:03:29.701 Test: lvol_get_xattr ...passed 00:03:29.701 Test: lvol_esnap_reload ...passed 00:03:29.701 Test: lvol_esnap_create_bad_args ...[2024-06-10 10:07:35.246427] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:29.701 [2024-06-10 10:07:35.246445] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:03:29.701 [2024-06-10 10:07:35.246475] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:03:29.701 [2024-06-10 10:07:35.246501] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:29.701 [2024-06-10 10:07:35.246532] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:29.701 passed 00:03:29.701 Test: lvol_esnap_create_delete ...passed 00:03:29.701 Test: lvol_esnap_load_esnaps ...passed 00:03:29.701 Test: lvol_esnap_missing ...[2024-06-10 10:07:35.246586] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:03:29.701 [2024-06-10 10:07:35.246600] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:03:29.701 [2024-06-10 10:07:35.246614] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:03:29.701 [2024-06-10 10:07:35.246631] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:29.701 [2024-06-10 10:07:35.246657] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:03:29.701 [2024-06-10 10:07:35.246699] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:03:29.701 passed 00:03:29.701 Test: lvol_esnap_hotplug ... 
00:03:29.701 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:03:29.701 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:03:29.701 [2024-06-10 10:07:35.246726] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:29.701 [2024-06-10 10:07:35.246739] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:29.701 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:03:29.701 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:03:29.701 [2024-06-10 10:07:35.246813] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 42537672-2711-11ef-b084-113036b5c18d: failed to create esnap bs_dev: error -12 00:03:29.701 [2024-06-10 10:07:35.247041] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 42537efc-2711-11ef-b084-113036b5c18d: failed to create esnap bs_dev: error -12 00:03:29.701 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:03:29.701 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:03:29.701 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:03:29.701 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:03:29.701 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:03:29.701 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:03:29.701 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:03:29.701 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:03:29.701 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:03:29.701 passed 00:03:29.701 Test: lvol_get_by ...passed 00:03:29.701 Test: lvol_shallow_copy ...passed 00:03:29.701 Test: lvol_set_parent ...passed 00:03:29.701 Test: lvol_set_external_parent ...[2024-06-10 10:07:35.247097] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 4253814f-2711-11ef-b084-113036b5c18d: failed to create esnap bs_dev: error -12 00:03:29.701 [2024-06-10 10:07:35.247323] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:29.701 [2024-06-10 10:07:35.247337] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 42538a64-2711-11ef-b084-113036b5c18d shallow copy, ext_dev must not be NULL 00:03:29.701 [2024-06-10 10:07:35.247370] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:03:29.701 [2024-06-10 10:07:35.247383] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:03:29.701 [2024-06-10 10:07:35.247408] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:03:29.701 [2024-06-10 10:07:35.247421] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:03:29.701 passed[2024-06-10 10:07:35.247433] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have 
the same UUID 00:03:29.701 00:03:29.701 00:03:29.701 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.701 suites 1 1 n/a 0 0 00:03:29.701 tests 37 37 37 0 0 00:03:29.701 asserts 1505 1505 1505 0 n/a 00:03:29.701 00:03:29.702 Elapsed time = 0.000 seconds 00:03:29.702 00:03:29.702 real 0m0.011s 00:03:29.702 user 0m0.000s 00:03:29.702 sys 0m0.015s 00:03:29.702 10:07:35 unittest.unittest_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:29.702 10:07:35 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:03:29.702 ************************************ 00:03:29.702 END TEST unittest_lvol 00:03:29.702 ************************************ 00:03:29.702 10:07:35 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:29.702 10:07:35 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:29.702 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:29.702 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:29.702 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:29.702 ************************************ 00:03:29.702 START TEST unittest_nvme_rdma 00:03:29.702 ************************************ 00:03:29.702 10:07:35 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:29.702 00:03:29.702 00:03:29.702 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.702 http://cunit.sourceforge.net/ 00:03:29.702 00:03:29.702 00:03:29.702 Suite: nvme_rdma 00:03:29.702 Test: test_nvme_rdma_build_sgl_request ...[2024-06-10 10:07:35.296050] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:03:29.702 [2024-06-10 10:07:35.296808] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1633:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:29.702 passed 00:03:29.702 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:03:29.702 Test: test_nvme_rdma_build_contig_request ...passed 00:03:29.702 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:03:29.702 Test: test_nvme_rdma_create_reqs ...[2024-06-10 10:07:35.296867] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1689:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:03:29.702 [2024-06-10 10:07:35.296912] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1570:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:29.702 [2024-06-10 10:07:35.296957] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:03:29.702 passed 00:03:29.702 Test: test_nvme_rdma_create_rsps ...passed 00:03:29.702 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:03:29.702 Test: test_nvme_rdma_poller_create ...passed 00:03:29.702 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:03:29.702 Test: test_nvme_rdma_ctrlr_construct ...[2024-06-10 10:07:35.297241] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:03:29.702 [2024-06-10 10:07:35.297273] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1827:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:29.702 [2024-06-10 10:07:35.297291] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1827:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:29.702 [2024-06-10 10:07:35.297316] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:03:29.702 passed 00:03:29.702 Test: test_nvme_rdma_req_put_and_get ...passed 00:03:29.702 Test: test_nvme_rdma_req_init ...passed 00:03:29.702 Test: test_nvme_rdma_validate_cm_event ...[2024-06-10 10:07:35.297377] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 624:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:03:29.702 [2024-06-10 10:07:35.297388] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 624:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:03:29.702 passed 00:03:29.702 Test: test_nvme_rdma_qpair_init ...passed 00:03:29.702 Test: test_nvme_rdma_qpair_submit_request ...passed 00:03:29.702 Test: test_nvme_rdma_memory_domain ...passed 00:03:29.702 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:03:29.702 Test: test_rdma_get_memory_translation ...passed 00:03:29.702 Test: test_get_rdma_qpair_from_wc ...passed 00:03:29.702 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:03:29.702 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:03:29.702 Test: test_nvme_rdma_qpair_set_poller ...passed 00:03:29.702 00:03:29.702 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.702 suites 1 1 n/a 0 0 00:03:29.702 tests 22 22 22 0 0 00:03:29.702 asserts 412 412 412 0 n/a 00:03:29.702 00:03:29.702 Elapsed time = 0.000 seconds 00:03:29.702 [2024-06-10 10:07:35.297419] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:03:29.702 [2024-06-10 10:07:35.297435] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:03:29.702 [2024-06-10 10:07:35.297445] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:03:29.702 [2024-06-10 10:07:35.297468] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:29.702 [2024-06-10 10:07:35.297477] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:29.702 [2024-06-10 10:07:35.297498] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:03:29.702 [2024-06-10 10:07:35.297507] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:03:29.702 [2024-06-10 10:07:35.297515] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820728298 on poll group 0x82b5b4000 00:03:29.702 [2024-06-10 10:07:35.297524] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:03:29.702 [2024-06-10 10:07:35.297532] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:03:29.702 [2024-06-10 10:07:35.297541] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820728298 on poll group 0x82b5b4000 00:03:29.702 [2024-06-10 10:07:35.297597] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:29.963 00:03:29.963 real 0m0.007s 00:03:29.963 user 0m0.005s 00:03:29.963 sys 0m0.004s 00:03:29.963 10:07:35 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:29.963 10:07:35 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:29.963 ************************************ 00:03:29.963 END TEST unittest_nvme_rdma 00:03:29.963 ************************************ 00:03:29.963 10:07:35 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:29.963 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:29.963 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:29.963 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:29.963 ************************************ 00:03:29.963 START TEST unittest_nvmf_transport 00:03:29.963 ************************************ 00:03:29.963 10:07:35 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:29.963 00:03:29.963 00:03:29.963 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.963 http://cunit.sourceforge.net/ 00:03:29.963 00:03:29.963 00:03:29.963 Suite: nvmf 00:03:29.963 Test: test_spdk_nvmf_transport_create ...[2024-06-10 10:07:35.337177] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:03:29.963 [2024-06-10 10:07:35.337488] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:03:29.963 [2024-06-10 10:07:35.337522] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:03:29.963 passed 00:03:29.963 Test: test_nvmf_transport_poll_group_create ...passed 00:03:29.963 Test: test_spdk_nvmf_transport_opts_init ...passed 00:03:29.963 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:03:29.963 00:03:29.963 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.963 suites 1 1 n/a 0 0 00:03:29.963 tests 4 4 4 0 0 00:03:29.963 asserts 49 49 49 0 n/a 00:03:29.963 00:03:29.963 Elapsed time = 0.000 seconds 00:03:29.963 [2024-06-10 10:07:35.337569] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:03:29.963 [2024-06-10 10:07:35.337615] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:03:29.963 [2024-06-10 10:07:35.337632] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:03:29.963 [2024-06-10 10:07:35.337647] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:03:29.963 00:03:29.963 real 0m0.007s 00:03:29.963 user 0m0.000s 00:03:29.963 sys 0m0.008s 00:03:29.963 10:07:35 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:29.963 10:07:35 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:03:29.963 ************************************ 00:03:29.963 END TEST unittest_nvmf_transport 00:03:29.963 ************************************ 00:03:29.963 10:07:35 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:29.963 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:29.963 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:29.963 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:29.963 ************************************ 00:03:29.963 START TEST unittest_rdma 00:03:29.963 ************************************ 00:03:29.963 10:07:35 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:29.963 00:03:29.963 00:03:29.963 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.963 http://cunit.sourceforge.net/ 00:03:29.963 00:03:29.963 00:03:29.963 Suite: rdma_common 00:03:29.963 Test: test_spdk_rdma_pd ...[2024-06-10 10:07:35.382255] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:03:29.963 [2024-06-10 10:07:35.382488] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:03:29.963 passed 00:03:29.963 00:03:29.963 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.963 suites 1 1 n/a 0 0 00:03:29.963 tests 1 1 1 0 0 00:03:29.963 asserts 31 31 31 0 n/a 00:03:29.963 00:03:29.963 Elapsed time = 0.000 seconds 00:03:29.963 00:03:29.963 real 0m0.006s 00:03:29.963 user 0m0.005s 00:03:29.963 sys 0m0.005s 00:03:29.963 10:07:35 
unittest.unittest_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:29.963 ************************************ 00:03:29.963 END TEST unittest_rdma 00:03:29.963 ************************************ 00:03:29.963 10:07:35 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:29.964 10:07:35 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:29.964 10:07:35 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:03:29.964 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:29.964 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:29.964 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:29.964 ************************************ 00:03:29.964 START TEST unittest_nvmf 00:03:29.964 ************************************ 00:03:29.964 10:07:35 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # unittest_nvmf 00:03:29.964 10:07:35 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:03:29.964 00:03:29.964 00:03:29.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.964 http://cunit.sourceforge.net/ 00:03:29.964 00:03:29.964 00:03:29.964 Suite: nvmf 00:03:29.964 Test: test_get_log_page ...[2024-06-10 10:07:35.430282] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2614:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:03:29.964 passed 00:03:29.964 Test: test_process_fabrics_cmd ...passed 00:03:29.964 Test: test_connect ...[2024-06-10 10:07:35.430508] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4684:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:03:29.964 [2024-06-10 10:07:35.430630] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1008:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:03:29.964 [2024-06-10 10:07:35.430661] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 871:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:03:29.964 [2024-06-10 10:07:35.430680] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1047:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:03:29.964 [2024-06-10 10:07:35.430703] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:03:29.964 [2024-06-10 10:07:35.430723] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 882:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:03:29.964 [2024-06-10 10:07:35.430742] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 890:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:03:29.964 [2024-06-10 10:07:35.430766] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 896:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:03:29.964 [2024-06-10 10:07:35.430788] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 922:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:03:29.964 [2024-06-10 10:07:35.430817] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 757:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:03:29.964 [2024-06-10 10:07:35.430843] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 672:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:03:29.964 [2024-06-10 10:07:35.430902] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 678:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:03:29.964 [2024-06-10 10:07:35.430937] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 685:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:03:29.964 [2024-06-10 10:07:35.430966] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 692:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:03:29.964 [2024-06-10 10:07:35.431000] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 716:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:03:29.964 [2024-06-10 10:07:35.431038] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 00:03:29.964 [2024-06-10 10:07:35.431083] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 802:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:03:29.964 [2024-06-10 10:07:35.431117] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 802:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:03:29.964 passed 00:03:29.964 Test: test_get_ns_id_desc_list ...passed 00:03:29.964 Test: test_identify_ns ...[2024-06-10 10:07:35.431205] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:29.964 passed 00:03:29.964 Test: test_identify_ns_iocs_specific ...[2024-06-10 10:07:35.431305] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:03:29.964 [2024-06-10 10:07:35.431360] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:03:29.964 [2024-06-10 10:07:35.431422] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:29.964 passed 00:03:29.964 Test: test_reservation_write_exclusive ...[2024-06-10 10:07:35.431533] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2708:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:29.964 passed 00:03:29.964 Test: test_reservation_exclusive_access ...passed 00:03:29.964 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:03:29.964 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:03:29.964 Test: test_reservation_notification_log_page ...passed 00:03:29.964 Test: test_get_dif_ctx ...passed 00:03:29.964 Test: test_set_get_features ...[2024-06-10 10:07:35.431704] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1644:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:29.964 passed 00:03:29.964 Test: test_identify_ctrlr ...passed 00:03:29.964 Test: test_identify_ctrlr_iocs_specific ...[2024-06-10 10:07:35.431753] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1644:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:29.964 [2024-06-10 10:07:35.431777] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1655:temp_threshold_opts_valid: *ERROR*: Invalid 
THSEL 3 00:03:29.964 [2024-06-10 10:07:35.431805] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1731:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:03:29.964 passed 00:03:29.964 Test: test_custom_admin_cmd ...passed 00:03:29.964 Test: test_fused_compare_and_write ...[2024-06-10 10:07:35.431983] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4216:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:03:29.964 [2024-06-10 10:07:35.432016] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4205:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:29.964 passed 00:03:29.964 Test: test_multi_async_event_reqs ...passed 00:03:29.964 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:03:29.964 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:03:29.964 Test: test_multi_async_events ...passed 00:03:29.964 Test: test_rae ...passed 00:03:29.964 Test: test_nvmf_ctrlr_create_destruct ...passed[2024-06-10 10:07:35.432044] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4223:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:29.964 00:03:29.964 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:03:29.964 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:03:29.964 Test: test_zcopy_read ...[2024-06-10 10:07:35.432185] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4684:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:03:29.964 [2024-06-10 10:07:35.432217] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4710:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:03:29.964 passed 00:03:29.964 Test: test_zcopy_write ...passed 00:03:29.964 Test: test_nvmf_property_set ...passed 00:03:29.964 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:03:29.964 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-06-10 10:07:35.432284] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1942:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:29.964 [2024-06-10 10:07:35.432314] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1942:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:29.964 [2024-06-10 10:07:35.432347] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1965:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:03:29.964 [2024-06-10 10:07:35.432372] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1971:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:03:29.964 passed 00:03:29.964 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:03:29.964 Test: test_nvmf_check_qpair_active ...[2024-06-10 10:07:35.432400] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1983:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:03:29.964 [2024-06-10 10:07:35.432455] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4684:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:03:29.964 [2024-06-10 10:07:35.432482] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4698:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:03:29.964 [2024-06-10 10:07:35.432510] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4710:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in 
state 0 00:03:29.964 passed 00:03:29.964 00:03:29.964 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.964 suites 1 1 n/a 0 0 00:03:29.964 tests 32 32 32 0 0 00:03:29.964 asserts 977 977 977 0 n/a 00:03:29.964 00:03:29.964 Elapsed time = 0.000 seconds[2024-06-10 10:07:35.432541] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4710:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:03:29.964 [2024-06-10 10:07:35.432567] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4710:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:03:29.964 00:03:29.964 10:07:35 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:03:29.964 00:03:29.964 00:03:29.964 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.964 http://cunit.sourceforge.net/ 00:03:29.964 00:03:29.964 00:03:29.964 Suite: nvmf 00:03:29.964 Test: test_get_rw_params ...passed 00:03:29.964 Test: test_get_rw_ext_params ...passed 00:03:29.964 Test: test_lba_in_range ...passed 00:03:29.964 Test: test_get_dif_ctx ...passed 00:03:29.964 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:03:29.964 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-06-10 10:07:35.440453] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:03:29.964 passed 00:03:29.964 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:03:29.964 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:03:29.964 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:03:29.964 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...[2024-06-10 10:07:35.440634] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:03:29.964 [2024-06-10 10:07:35.440649] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:03:29.964 [2024-06-10 10:07:35.440676] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:03:29.964 [2024-06-10 10:07:35.440696] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:03:29.964 [2024-06-10 10:07:35.440724] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:03:29.965 [2024-06-10 10:07:35.440738] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:03:29.965 [2024-06-10 10:07:35.440754] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:03:29.965 [2024-06-10 10:07:35.440763] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:03:29.965 passed 00:03:29.965 00:03:29.965 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.965 suites 1 1 n/a 0 0 00:03:29.965 tests 10 10 10 0 0 00:03:29.965 asserts 159 159 159 0 n/a 00:03:29.965 00:03:29.965 Elapsed time = 0.000 seconds 00:03:29.965 10:07:35 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:03:29.965 00:03:29.965 00:03:29.965 
CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.965 http://cunit.sourceforge.net/ 00:03:29.965 00:03:29.965 00:03:29.965 Suite: nvmf 00:03:29.965 Test: test_discovery_log ...passed 00:03:29.965 Test: test_discovery_log_with_filters ...passed 00:03:29.965 00:03:29.965 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.965 suites 1 1 n/a 0 0 00:03:29.965 tests 2 2 2 0 0 00:03:29.965 asserts 238 238 238 0 n/a 00:03:29.965 00:03:29.965 Elapsed time = 0.000 seconds 00:03:29.965 10:07:35 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:03:29.965 00:03:29.965 00:03:29.965 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.965 http://cunit.sourceforge.net/ 00:03:29.965 00:03:29.965 00:03:29.965 Suite: nvmf 00:03:29.965 Test: nvmf_test_create_subsystem ...[2024-06-10 10:07:35.451425] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:03:29.965 [2024-06-10 10:07:35.451652] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:03:29.965 [2024-06-10 10:07:35.451680] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:03:29.965 [2024-06-10 10:07:35.451696] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:03:29.965 [2024-06-10 10:07:35.451731] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:03:29.965 [2024-06-10 10:07:35.451745] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:03:29.965 [2024-06-10 10:07:35.451759] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:03:29.965 [2024-06-10 10:07:35.451772] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:03:29.965 [2024-06-10 10:07:35.451786] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:03:29.965 [2024-06-10 10:07:35.451800] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:03:29.965 [2024-06-10 10:07:35.451813] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:03:29.965 [2024-06-10 10:07:35.451827] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:03:29.965 [2024-06-10 10:07:35.451849] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:03:29.965 [2024-06-10 10:07:35.451863] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:03:29.965 passed 00:03:29.965 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:03:29.965 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed 00:03:29.965 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:03:29.965 Test: test_spdk_nvmf_ns_visible ...passed 00:03:29.965 Test: test_reservation_register ...passed 00:03:29.965 Test: test_reservation_register_with_ptpl ...[2024-06-10 10:07:35.451905] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:03:29.965 [2024-06-10 10:07:35.451919] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:03:29.965 [2024-06-10 10:07:35.451936] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:03:29.965 [2024-06-10 10:07:35.451950] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:03:29.965 [2024-06-10 10:07:35.451964] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:29.965 [2024-06-10 10:07:35.451977] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:03:29.965 [2024-06-10 10:07:35.451992] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:29.965 [2024-06-10 10:07:35.452005] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:03:29.965 [2024-06-10 10:07:35.452073] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:03:29.965 [2024-06-10 10:07:35.452089] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2010:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 
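The nvmf_test_create_subsystem errors above enumerate the NQN rules that spdk_nvmf_subsystem_create() rejects: 11 to 223 bytes, an "nqn.yyyy-mm." date prefix, reverse-domain labels that start with a letter and end with an alphanumeric character, and a non-empty user part after the ':'. The standalone checker below is a simplified sketch of those rules for reading the log, not lib/nvmf/subsystem.c's nvmf_nqn_is_valid(); the 63-byte label limit and the loose handling of the date digits are assumptions made for the sketch.

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Simplified NQN checker mirroring the constraints reported above.
 * NOT SPDK's nvmf_nqn_is_valid(); the 63-byte label limit and the
 * skipped date-digit validation are assumptions for this sketch. */
static bool nqn_looks_valid(const char *nqn)
{
    size_t len = strlen(nqn);
    const char *colon;
    const char *p;

    if (len < 11 || len > 223) {              /* "length 0 < min 11", "length 224 > max 223" */
        return false;
    }
    if (strncmp(nqn, "nqn.", 4) != 0) {
        return false;
    }
    colon = strchr(nqn + 4, ':');
    if (colon == NULL || colon[1] == '\0') {  /* user part after ':' must be non-empty */
        return false;
    }
    /* Walk the reverse-domain labels between "nqn.yyyy-mm." and ':'. */
    for (p = nqn + 12; p < colon;) {
        const char *end = p;

        while (end < colon && *end != '.') {
            end++;
        }
        if (end == p || end - p > 63 ||             /* empty or over-long label */
            !isalpha((unsigned char)p[0]) ||        /* must start with a letter */
            !isalnum((unsigned char)end[-1])) {     /* must end alphanumeric */
            return false;
        }
        p = (end < colon) ? end + 1 : end;
    }
    return true;
}

int main(void)
{
    static const char *samples[] = {
        "nqn.2016-06.io.spdk:cnode1",       /* valid */
        "nqn.2016-06.io.spdk:",             /* empty user part */
        "nqn.2016-06.io.3spdk:sub",         /* label starts with a digit */
        "nqn.2016-06.io.spdk-:subsystem1",  /* label ends with '-' */
        "nqn.2016-06.io..spdk:subsystem1",  /* empty label */
    };
    size_t i;

    for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        printf("%-36s -> %s\n", samples[i],
               nqn_looks_valid(samples[i]) ? "valid" : "invalid");
    }
    return 0;
}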
00:03:29.965 [2024-06-10 10:07:35.452123] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2142:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:03:29.965 [2024-06-10 10:07:35.452155] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:03:29.965 [2024-06-10 10:07:35.452248] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3082:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:29.965 [2024-06-10 10:07:35.452269] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3138:nvmf_ns_reservation_register: *ERROR*: No registrant 00:03:29.965 passed 00:03:29.965 Test: test_reservation_acquire_preempt_1 ...passed 00:03:29.965 Test: test_reservation_acquire_release_with_ptpl ...passed 00:03:29.965 Test: test_reservation_release ...[2024-06-10 10:07:35.452481] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3082:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:29.965 [2024-06-10 10:07:35.452668] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3082:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:29.965 passed 00:03:29.965 Test: test_reservation_unregister_notification ...passed 00:03:29.965 Test: test_reservation_release_notification ...passed 00:03:29.965 Test: test_reservation_release_notification_write_exclusive ...passed 00:03:29.965 Test: test_reservation_clear_notification ...passed 00:03:29.965 Test: test_reservation_preempt_notification ...passed 00:03:29.965 Test: test_spdk_nvmf_ns_event ...passed 00:03:29.965 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:03:29.965 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:03:29.965 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:03:29.965 Test: test_nvmf_ns_reservation_report ...passed 00:03:29.965 Test: test_nvmf_nqn_is_valid ...passed 00:03:29.965 Test: test_nvmf_ns_reservation_restore ...passed 00:03:29.965 Test: test_nvmf_subsystem_state_change ...passed 00:03:29.965 Test: test_nvmf_reservation_custom_ops ...passed 00:03:29.965 00:03:29.965 [2024-06-10 10:07:35.452706] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3082:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:29.965 [2024-06-10 10:07:35.452732] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3082:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:29.965 [2024-06-10 10:07:35.452756] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3082:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:29.965 [2024-06-10 10:07:35.452779] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3082:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:29.965 [2024-06-10 10:07:35.452804] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3082:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:29.965 [2024-06-10 10:07:35.452965] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:03:29.965 [2024-06-10 10:07:35.452989] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:03:29.965 [2024-06-10 
10:07:35.453012] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3444:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:03:29.965 [2024-06-10 10:07:35.453052] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:03:29.965 [2024-06-10 10:07:35.453067] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:4272ee29-2711-11ef-b084-113036b5c18": uuid is not the correct length 00:03:29.965 [2024-06-10 10:07:35.453081] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:03:29.965 [2024-06-10 10:07:35.453120] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2637:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:03:29.965 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.965 suites 1 1 n/a 0 0 00:03:29.965 tests 24 24 24 0 0 00:03:29.965 asserts 499 499 499 0 n/a 00:03:29.965 00:03:29.965 Elapsed time = 0.000 seconds 00:03:29.966 10:07:35 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:03:29.966 00:03:29.966 00:03:29.966 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.966 http://cunit.sourceforge.net/ 00:03:29.966 00:03:29.966 00:03:29.966 Suite: nvmf 00:03:29.966 Test: test_nvmf_tcp_create ...[2024-06-10 10:07:35.464184] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:03:29.966 passed 00:03:29.966 Test: test_nvmf_tcp_destroy ...passed 00:03:29.966 Test: test_nvmf_tcp_poll_group_create ...passed 00:03:29.966 Test: test_nvmf_tcp_send_c2h_data ...passed 00:03:29.966 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:03:29.966 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:03:29.966 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:03:29.966 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-06-10 10:07:35.476528] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 passed 00:03:29.966 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:03:29.966 Test: test_nvmf_tcp_icreq_handle ...passed 00:03:29.966 Test: test_nvmf_tcp_check_xfer_type ...passed 00:03:29.966 Test: test_nvmf_tcp_invalid_sgl ...passed 00:03:29.966 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-06-10 10:07:35.476564] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.476589] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.476612] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.476630] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.476679] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2117:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:29.966 [2024-06-10 10:07:35.476703] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.476723] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117570 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.476745] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2117:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:29.966 [2024-06-10 10:07:35.476761] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117570 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.476784] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.476802] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117570 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.476825] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.476844] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117570 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.476884] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2513:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:03:29.966 [2024-06-10 10:07:35.476906] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.476927] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117570 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.476949] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2244:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x821116df8 00:03:29.966 [2024-06-10 10:07:35.476962] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.476975] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.476990] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2303:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x821117668 00:03:29.966 [2024-06-10 10:07:35.477011] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.477031] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.477051] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2254:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:03:29.966 [2024-06-10 10:07:35.477072] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.477092] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.477115] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2293:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:03:29.966 [2024-06-10 10:07:35.477135] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.477153] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.477174] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.477194] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.477218] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.477230] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.477245] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 passed 00:03:29.966 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-06-10 10:07:35.477257] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.477272] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.477291] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.477312] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.477334] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 [2024-06-10 10:07:35.477354] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:29.966 [2024-06-10 10:07:35.477377] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1603:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x821117668 is same with the state(5) to be set 00:03:29.966 passed 00:03:29.966 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-06-10 10:07:35.484695] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 
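Each *_ut binary in this log prints the same CUnit banner and Run Summary block (the suites / tests / asserts counts). That output comes from CUnit's basic interface; a minimal sketch of the registration pattern that produces it, with a hypothetical suite and test name, is:

#include <CUnit/Basic.h>

/* Hypothetical test: its assert results feed the "asserts" column of
 * the Run Summary block seen throughout this log. */
static void test_example(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);
}

int main(void)
{
    CU_pSuite suite;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("nvmf", NULL, NULL);   /* suite name shown after "Suite:" */
    if (suite == NULL ||
        CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_basic_set_mode(CU_BRM_VERBOSE);          /* per-test "...passed" lines */
    CU_basic_run_tests();                       /* prints the Run Summary block */
    CU_cleanup_registry();
    return CU_get_error();
}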
00:03:29.966 [2024-06-10 10:07:35.484727] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:03:29.966 passed 00:03:29.966 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-06-10 10:07:35.484899] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:03:29.966 [2024-06-10 10:07:35.484915] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:03:29.966 passed 00:03:29.966 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:03:29.966 00:03:29.966 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.966 suites 1 1 n/a 0 0 00:03:29.966 tests 17 17 17 0 0 00:03:29.966 asserts 222 222 222 0 n/a 00:03:29.966 00:03:29.966 Elapsed time = 0.023 seconds 00:03:29.966 [2024-06-10 10:07:35.485003] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:03:29.966 [2024-06-10 10:07:35.485016] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:03:29.966 10:07:35 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:03:29.966 00:03:29.966 00:03:29.966 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.966 http://cunit.sourceforge.net/ 00:03:29.966 00:03:29.966 00:03:29.966 Suite: nvmf 00:03:29.966 Test: test_nvmf_tgt_create_poll_group ...passed 00:03:29.966 00:03:29.966 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.966 suites 1 1 n/a 0 0 00:03:29.966 tests 1 1 1 0 0 00:03:29.966 asserts 17 17 17 0 n/a 00:03:29.966 00:03:29.966 Elapsed time = 0.000 seconds 00:03:29.966 00:03:29.966 real 0m0.071s 00:03:29.966 user 0m0.029s 00:03:29.966 sys 0m0.042s 00:03:29.966 10:07:35 unittest.unittest_nvmf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:29.966 10:07:35 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:03:29.966 ************************************ 00:03:29.966 END TEST unittest_nvmf 00:03:29.966 ************************************ 00:03:29.966 10:07:35 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:29.966 10:07:35 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:29.966 10:07:35 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:29.966 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:29.966 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:29.966 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:29.966 ************************************ 00:03:29.966 START TEST unittest_nvmf_rdma 00:03:29.966 ************************************ 00:03:29.966 10:07:35 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:29.966 00:03:29.966 00:03:29.966 CUnit - A unit testing framework for C - Version 2.1-3 00:03:29.966 http://cunit.sourceforge.net/ 00:03:29.967 00:03:29.967 00:03:29.967 Suite: nvmf 00:03:29.967 Test: 
test_spdk_nvmf_rdma_request_parse_sgl ...passed 00:03:29.967 Test: test_spdk_nvmf_rdma_request_process ...passed 00:03:29.967 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:03:29.967 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:03:29.967 Test: test_nvmf_rdma_opts_init ...passed 00:03:29.967 Test: test_nvmf_rdma_request_free_data ...passed 00:03:29.967 Test: test_nvmf_rdma_resources_create ...[2024-06-10 10:07:35.543828] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1859:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:03:29.967 [2024-06-10 10:07:35.544020] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1909:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:03:29.967 [2024-06-10 10:07:35.544035] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1909:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:03:29.967 passed 00:03:29.967 Test: test_nvmf_rdma_qpair_compare ...passed 00:03:29.967 Test: test_nvmf_rdma_resize_cq ...[2024-06-10 10:07:35.544752] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 950:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:03:29.967 Using CQ of insufficient size may lead to CQ overrun 00:03:29.967 [2024-06-10 10:07:35.544767] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:03:29.967 passed 00:03:29.967 00:03:29.967 Run Summary: Type Total Ran Passed Failed Inactive 00:03:29.967 suites 1 1 n/a 0 0 00:03:29.967 tests 9 9 9 0 0 00:03:29.967 asserts 579 579 579 0 n/a 00:03:29.967 00:03:29.967 Elapsed time = 0.000 seconds 00:03:29.967 [2024-06-10 10:07:35.544804] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 962:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:29.967 00:03:29.967 real 0m0.006s 00:03:29.967 user 0m0.000s 00:03:29.967 sys 0m0.008s 00:03:29.967 10:07:35 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:29.967 ************************************ 00:03:29.967 END TEST unittest_nvmf_rdma 00:03:29.967 ************************************ 00:03:29.967 10:07:35 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:30.229 10:07:35 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:30.229 10:07:35 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:03:30.229 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.229 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.229 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.229 ************************************ 00:03:30.229 START TEST unittest_scsi 00:03:30.229 ************************************ 00:03:30.229 10:07:35 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # unittest_scsi 00:03:30.229 10:07:35 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:03:30.229 00:03:30.229 00:03:30.229 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.229 http://cunit.sourceforge.net/ 00:03:30.229 00:03:30.229 00:03:30.229 Suite: dev_suite 00:03:30.229 Test: dev_destruct_null_dev ...passed 00:03:30.229 Test: dev_destruct_zero_luns 
...passed 00:03:30.229 Test: dev_destruct_null_lun ...passed 00:03:30.229 Test: dev_destruct_success ...passed 00:03:30.229 Test: dev_construct_num_luns_zero ...passed 00:03:30.229 Test: dev_construct_no_lun_zero ...passed 00:03:30.229 Test: dev_construct_null_lun ...passed 00:03:30.229 Test: dev_construct_name_too_long ...passed 00:03:30.229 Test: dev_construct_success ...passed 00:03:30.229 Test: dev_construct_success_lun_zero_not_first ...passed 00:03:30.229 Test: dev_queue_mgmt_task_success ...passed 00:03:30.229 Test: dev_queue_task_success ...passed 00:03:30.229 Test: dev_stop_success ...passed 00:03:30.229 Test: dev_add_port_max_ports ...passed 00:03:30.229 Test: dev_add_port_construct_failure1 ...passed 00:03:30.229 Test: dev_add_port_construct_failure2 ...[2024-06-10 10:07:35.585417] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:03:30.229 [2024-06-10 10:07:35.585599] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:03:30.229 [2024-06-10 10:07:35.585622] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:03:30.230 [2024-06-10 10:07:35.585644] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:03:30.230 [2024-06-10 10:07:35.585699] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:03:30.230 [2024-06-10 10:07:35.585733] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:03:30.230 passed 00:03:30.230 Test: dev_add_port_success1 ...passed 00:03:30.230 Test: dev_add_port_success2 ...passed 00:03:30.230 Test: dev_add_port_success3 ...passed 00:03:30.230 Test: dev_find_port_by_id_num_ports_zero ...passed 00:03:30.230 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:03:30.230 Test: dev_find_port_by_id_success ...passed 00:03:30.230 Test: dev_add_lun_bdev_not_found ...[2024-06-10 10:07:35.585757] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:03:30.230 passed 00:03:30.230 Test: dev_add_lun_no_free_lun_id ...passed 00:03:30.230 Test: dev_add_lun_success1 ...passed 00:03:30.230 Test: dev_add_lun_success2 ...passed 00:03:30.230 Test: dev_check_pending_tasks ...passed 00:03:30.230 Test: dev_iterate_luns ...passed 00:03:30.230 Test: dev_find_free_lun ...passed 00:03:30.230 00:03:30.230 [2024-06-10 10:07:35.586045] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:03:30.230 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.230 suites 1 1 n/a 0 0 00:03:30.230 tests 29 29 29 0 0 00:03:30.230 asserts 97 97 97 0 n/a 00:03:30.230 00:03:30.230 Elapsed time = 0.000 seconds 00:03:30.230 10:07:35 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:03:30.230 00:03:30.230 00:03:30.230 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.230 http://cunit.sourceforge.net/ 00:03:30.230 
00:03:30.230 00:03:30.230 Suite: lun_suite 00:03:30.230 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-06-10 10:07:35.591927] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:03:30.230 passed 00:03:30.230 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:03:30.230 Test: lun_task_mgmt_execute_lun_reset ...passed 00:03:30.230 Test: lun_task_mgmt_execute_target_reset ...passed[2024-06-10 10:07:35.592084] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:03:30.230 00:03:30.230 Test: lun_task_mgmt_execute_invalid_case ...passed 00:03:30.230 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:03:30.230 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:03:30.230 Test: lun_append_task_null_lun_not_supported ...passed 00:03:30.230 Test: lun_execute_scsi_task_pending ...passed 00:03:30.230 Test: lun_execute_scsi_task_complete ...passed 00:03:30.230 Test: lun_execute_scsi_task_resize ...passed 00:03:30.230 Test: lun_destruct_success ...passed 00:03:30.230 Test: lun_construct_null_ctx ...passed 00:03:30.230 Test: lun_construct_success ...passed 00:03:30.230 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:03:30.230 Test: lun_reset_task_suspend_scsi_task ...passed 00:03:30.230 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:03:30.230 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:03:30.230 00:03:30.230 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.230 suites 1 1 n/a 0 0 00:03:30.230 tests 18 18 18 0 0 00:03:30.230 asserts 153 153 153 0 n/a 00:03:30.230 00:03:30.230 Elapsed time = 0.000 seconds 00:03:30.230 [2024-06-10 10:07:35.592122] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:03:30.230 [2024-06-10 10:07:35.592153] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:03:30.230 10:07:35 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:03:30.230 00:03:30.230 00:03:30.230 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.230 http://cunit.sourceforge.net/ 00:03:30.230 00:03:30.230 00:03:30.230 Suite: scsi_suite 00:03:30.230 Test: scsi_init ...passed 00:03:30.230 00:03:30.230 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.230 suites 1 1 n/a 0 0 00:03:30.230 tests 1 1 1 0 0 00:03:30.230 asserts 1 1 1 0 n/a 00:03:30.230 00:03:30.230 Elapsed time = 0.000 seconds 00:03:30.230 10:07:35 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:03:30.230 00:03:30.230 00:03:30.230 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.230 http://cunit.sourceforge.net/ 00:03:30.230 00:03:30.230 00:03:30.230 Suite: translation_suite 00:03:30.230 Test: mode_select_6_test ...passed 00:03:30.230 Test: mode_select_6_test2 ...passed 00:03:30.230 Test: mode_sense_6_test ...passed 00:03:30.230 Test: mode_sense_10_test ...passed 00:03:30.230 Test: inquiry_evpd_test ...passed 00:03:30.230 Test: inquiry_standard_test ...passed 00:03:30.230 Test: inquiry_overflow_test ...passed 00:03:30.230 Test: task_complete_test ...passed 00:03:30.230 Test: lba_range_test ...passed 00:03:30.230 Test: xfer_len_test ...passed 00:03:30.230 Test: xfer_test ...passed 
00:03:30.230 Test: scsi_name_padding_test ...passed 00:03:30.230 Test: get_dif_ctx_test ...passed 00:03:30.230 Test: unmap_split_test ...passed 00:03:30.230 00:03:30.230 [2024-06-10 10:07:35.601875] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:03:30.230 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.230 suites 1 1 n/a 0 0 00:03:30.230 tests 14 14 14 0 0 00:03:30.230 asserts 1205 1205 1205 0 n/a 00:03:30.230 00:03:30.230 Elapsed time = 0.000 seconds 00:03:30.230 10:07:35 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:03:30.230 00:03:30.230 00:03:30.230 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.230 http://cunit.sourceforge.net/ 00:03:30.230 00:03:30.230 00:03:30.230 Suite: reservation_suite 00:03:30.230 Test: test_reservation_register ...passed 00:03:30.230 Test: test_reservation_reserve ...[2024-06-10 10:07:35.608265] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:30.230 [2024-06-10 10:07:35.608530] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:30.230 [2024-06-10 10:07:35.608555] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:03:30.230 [2024-06-10 10:07:35.608574] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:03:30.230 passed 00:03:30.230 Test: test_reservation_preempt_non_all_regs ...passed 00:03:30.230 Test: test_reservation_preempt_all_regs ...[2024-06-10 10:07:35.608598] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:30.230 [2024-06-10 10:07:35.608615] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:03:30.230 passed 00:03:30.230 Test: test_reservation_cmds_conflict ...passed 00:03:30.230 Test: test_scsi2_reserve_release ...passed 00:03:30.230 Test: test_pr_with_scsi2_reserve_release ...passed 00:03:30.230 00:03:30.230 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.230 suites 1 1 n/a 0 0 00:03:30.230 tests 7 7 7 0 0 00:03:30.230 asserts 257 257 257 0 n/a 00:03:30.230 00:03:30.230 Elapsed time = 0.000 seconds 00:03:30.230 [2024-06-10 10:07:35.608650] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:30.230 [2024-06-10 10:07:35.608676] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:30.230 [2024-06-10 10:07:35.608693] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:03:30.230 [2024-06-10 10:07:35.608709] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:30.230 [2024-06-10 10:07:35.608723] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:30.230 [2024-06-10 10:07:35.608737] 
/usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:30.230 [2024-06-10 10:07:35.608751] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:30.230 [2024-06-10 10:07:35.608783] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:30.230 00:03:30.230 real 0m0.029s 00:03:30.230 user 0m0.022s 00:03:30.230 sys 0m0.021s 00:03:30.230 10:07:35 unittest.unittest_scsi -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.230 10:07:35 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:03:30.230 ************************************ 00:03:30.230 END TEST unittest_scsi 00:03:30.230 ************************************ 00:03:30.230 10:07:35 unittest -- unit/unittest.sh@278 -- # uname -s 00:03:30.230 10:07:35 unittest -- unit/unittest.sh@278 -- # '[' FreeBSD = Linux ']' 00:03:30.230 10:07:35 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:30.230 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.230 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.230 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.230 ************************************ 00:03:30.230 START TEST unittest_thread 00:03:30.230 ************************************ 00:03:30.230 10:07:35 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:30.230 00:03:30.230 00:03:30.230 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.230 http://cunit.sourceforge.net/ 00:03:30.230 00:03:30.231 00:03:30.231 Suite: io_channel 00:03:30.231 Test: thread_alloc ...passed 00:03:30.231 Test: thread_send_msg ...passed 00:03:30.231 Test: thread_poller ...passed 00:03:30.231 Test: poller_pause ...passed 00:03:30.231 Test: thread_for_each ...passed 00:03:30.231 Test: for_each_channel_remove ...passed 00:03:30.231 Test: for_each_channel_unreg ...[2024-06-10 10:07:35.655003] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2174:spdk_io_device_register: *ERROR*: io_device 0x820dbc5f4 already registered (old:0x82bb86000 new:0x82bb86180) 00:03:30.231 passed 00:03:30.231 Test: thread_name ...passed 00:03:30.231 Test: channel ...[2024-06-10 10:07:35.655460] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2307:spdk_get_io_channel: *ERROR*: could not find io_device 0x2287b8 00:03:30.231 passed 00:03:30.231 Test: channel_destroy_races ...passed 00:03:30.231 Test: thread_exit_test ...passed 00:03:30.231 Test: thread_update_stats_test ...passed 00:03:30.231 Test: nested_channel ...passed 00:03:30.231 Test: device_unregister_and_thread_exit_race ...passed 00:03:30.231 Test: cache_closest_timed_poller ...passed 00:03:30.231 Test: multi_timed_pollers_have_same_expiration ...passed 00:03:30.231 Test: io_device_lookup ...passed 00:03:30.231 Test: spdk_spin ...[2024-06-10 10:07:35.655867] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 636:thread_exit: *ERROR*: thread 0x82bb4ba80 got timeout, and move it to the exited state forcefully 00:03:30.231 [2024-06-10 10:07:35.656729] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3071:spdk_spin_lock: *ERROR*: unrecoverable spinlock 
error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:30.231 [2024-06-10 10:07:35.656742] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820dbc5f0 00:03:30.231 [2024-06-10 10:07:35.656752] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3109:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:30.231 [2024-06-10 10:07:35.656900] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:03:30.231 passed 00:03:30.231 Test: for_each_channel_and_thread_exit_race ...[2024-06-10 10:07:35.656915] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820dbc5f0 00:03:30.231 [2024-06-10 10:07:35.656929] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:30.231 [2024-06-10 10:07:35.656942] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820dbc5f0 00:03:30.231 [2024-06-10 10:07:35.656956] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:30.231 [2024-06-10 10:07:35.656973] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820dbc5f0 00:03:30.231 [2024-06-10 10:07:35.656985] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3053:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:03:30.231 [2024-06-10 10:07:35.656997] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x820dbc5f0 00:03:30.231 passed 00:03:30.231 Test: for_each_thread_and_thread_exit_race ...passed 00:03:30.231 00:03:30.231 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.231 suites 1 1 n/a 0 0 00:03:30.231 tests 20 20 20 0 0 00:03:30.231 asserts 409 409 409 0 n/a 00:03:30.231 00:03:30.231 Elapsed time = 0.008 seconds 00:03:30.231 00:03:30.231 real 0m0.010s 00:03:30.231 user 0m0.000s 00:03:30.231 sys 0m0.009s 00:03:30.231 10:07:35 unittest.unittest_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.231 ************************************ 00:03:30.231 END TEST unittest_thread 00:03:30.231 ************************************ 00:03:30.231 10:07:35 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:03:30.231 10:07:35 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:30.231 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.231 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.231 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.231 ************************************ 00:03:30.231 START TEST unittest_iobuf 00:03:30.231 ************************************ 00:03:30.231 10:07:35 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:30.231 00:03:30.231 00:03:30.231 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.231 http://cunit.sourceforge.net/ 00:03:30.231 00:03:30.231 00:03:30.231 Suite: io_channel 
00:03:30.231 Test: iobuf ...passed 00:03:30.231 Test: iobuf_cache ...[2024-06-10 10:07:35.697524] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:30.231 passed 00:03:30.231 00:03:30.231 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.231 suites 1 1 n/a 0 0 00:03:30.231 tests 2 2 2 0 0 00:03:30.231 asserts 107 107 107 0 n/a 00:03:30.231 00:03:30.231 Elapsed time = 0.000 seconds 00:03:30.231 [2024-06-10 10:07:35.697700] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:30.231 [2024-06-10 10:07:35.697730] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:03:30.231 [2024-06-10 10:07:35.697741] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:30.231 [2024-06-10 10:07:35.697754] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:30.231 [2024-06-10 10:07:35.697764] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:30.231 00:03:30.231 real 0m0.005s 00:03:30.231 user 0m0.000s 00:03:30.231 sys 0m0.008s 00:03:30.231 10:07:35 unittest.unittest_iobuf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.231 ************************************ 00:03:30.231 END TEST unittest_iobuf 00:03:30.231 ************************************ 00:03:30.231 10:07:35 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:03:30.231 10:07:35 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:03:30.231 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.231 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.231 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.231 ************************************ 00:03:30.231 START TEST unittest_util 00:03:30.231 ************************************ 00:03:30.231 10:07:35 unittest.unittest_util -- common/autotest_common.sh@1124 -- # unittest_util 00:03:30.231 10:07:35 unittest.unittest_util -- unit/unittest.sh@134 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:03:30.231 00:03:30.231 00:03:30.231 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.231 http://cunit.sourceforge.net/ 00:03:30.231 00:03:30.231 00:03:30.231 Suite: base64 00:03:30.231 Test: test_base64_get_encoded_strlen ...passed 00:03:30.231 Test: test_base64_get_decoded_len ...passed 00:03:30.231 Test: test_base64_encode ...passed 00:03:30.231 Test: test_base64_decode ...passed 00:03:30.231 Test: test_base64_urlsafe_encode ...passed 00:03:30.231 Test: test_base64_urlsafe_decode ...passed 00:03:30.231 00:03:30.231 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.231 suites 1 1 n/a 0 0 00:03:30.231 tests 6 6 6 0 0 00:03:30.231 asserts 112 112 112 
0 n/a 00:03:30.231 00:03:30.231 Elapsed time = 0.000 seconds 00:03:30.231 10:07:35 unittest.unittest_util -- unit/unittest.sh@135 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:03:30.231 00:03:30.231 00:03:30.231 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.231 http://cunit.sourceforge.net/ 00:03:30.231 00:03:30.231 00:03:30.231 Suite: bit_array 00:03:30.231 Test: test_1bit ...passed 00:03:30.231 Test: test_64bit ...passed 00:03:30.231 Test: test_find ...passed 00:03:30.231 Test: test_resize ...passed 00:03:30.231 Test: test_errors ...passed 00:03:30.231 Test: test_count ...passed 00:03:30.231 Test: test_mask_store_load ...passed 00:03:30.231 Test: test_mask_clear ...passed 00:03:30.231 00:03:30.231 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.231 suites 1 1 n/a 0 0 00:03:30.231 tests 8 8 8 0 0 00:03:30.231 asserts 5075 5075 5075 0 n/a 00:03:30.231 00:03:30.231 Elapsed time = 0.000 seconds 00:03:30.231 10:07:35 unittest.unittest_util -- unit/unittest.sh@136 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:03:30.231 00:03:30.231 00:03:30.231 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.231 http://cunit.sourceforge.net/ 00:03:30.231 00:03:30.231 00:03:30.231 Suite: cpuset 00:03:30.231 Test: test_cpuset ...passed 00:03:30.231 Test: test_cpuset_parse ...[2024-06-10 10:07:35.750921] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:03:30.231 [2024-06-10 10:07:35.751191] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:03:30.231 [2024-06-10 10:07:35.751206] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:03:30.231 [2024-06-10 10:07:35.751217] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:03:30.231 [2024-06-10 10:07:35.751227] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:03:30.231 [2024-06-10 10:07:35.751237] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:03:30.231 [2024-06-10 10:07:35.751246] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:03:30.232 [2024-06-10 10:07:35.751256] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:03:30.232 passed 00:03:30.232 Test: test_cpuset_fmt ...passed 00:03:30.232 00:03:30.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.232 suites 1 1 n/a 0 0 00:03:30.232 tests 3 3 3 0 0 00:03:30.232 asserts 65 65 65 0 n/a 00:03:30.232 00:03:30.232 Elapsed time = 0.000 seconds 00:03:30.232 10:07:35 unittest.unittest_util -- unit/unittest.sh@137 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:03:30.232 00:03:30.232 00:03:30.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.232 http://cunit.sourceforge.net/ 00:03:30.232 00:03:30.232 00:03:30.232 Suite: crc16 00:03:30.232 Test: test_crc16_t10dif ...passed 00:03:30.232 Test: test_crc16_t10dif_seed ...passed 00:03:30.232 Test: test_crc16_t10dif_copy ...passed 00:03:30.232 00:03:30.232 Run Summary: Type Total Ran Passed 
Failed Inactive 00:03:30.232 suites 1 1 n/a 0 0 00:03:30.232 tests 3 3 3 0 0 00:03:30.232 asserts 5 5 5 0 n/a 00:03:30.232 00:03:30.232 Elapsed time = 0.000 seconds 00:03:30.232 10:07:35 unittest.unittest_util -- unit/unittest.sh@138 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:03:30.232 00:03:30.232 00:03:30.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.232 http://cunit.sourceforge.net/ 00:03:30.232 00:03:30.232 00:03:30.232 Suite: crc32_ieee 00:03:30.232 Test: test_crc32_ieee ...passed 00:03:30.232 00:03:30.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.232 suites 1 1 n/a 0 0 00:03:30.232 tests 1 1 1 0 0 00:03:30.232 asserts 1 1 1 0 n/a 00:03:30.232 00:03:30.232 Elapsed time = 0.000 seconds 00:03:30.232 10:07:35 unittest.unittest_util -- unit/unittest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:03:30.232 00:03:30.232 00:03:30.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.232 http://cunit.sourceforge.net/ 00:03:30.232 00:03:30.232 00:03:30.232 Suite: crc32c 00:03:30.232 Test: test_crc32c ...passed 00:03:30.232 Test: test_crc32c_nvme ...passed 00:03:30.232 00:03:30.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.232 suites 1 1 n/a 0 0 00:03:30.232 tests 2 2 2 0 0 00:03:30.232 asserts 16 16 16 0 n/a 00:03:30.232 00:03:30.232 Elapsed time = 0.000 seconds 00:03:30.232 10:07:35 unittest.unittest_util -- unit/unittest.sh@140 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:03:30.232 00:03:30.232 00:03:30.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.232 http://cunit.sourceforge.net/ 00:03:30.232 00:03:30.232 00:03:30.232 Suite: crc64 00:03:30.232 Test: test_crc64_nvme ...passed 00:03:30.232 00:03:30.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.232 suites 1 1 n/a 0 0 00:03:30.232 tests 1 1 1 0 0 00:03:30.232 asserts 4 4 4 0 n/a 00:03:30.232 00:03:30.232 Elapsed time = 0.000 seconds 00:03:30.232 10:07:35 unittest.unittest_util -- unit/unittest.sh@141 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:03:30.232 00:03:30.232 00:03:30.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.232 http://cunit.sourceforge.net/ 00:03:30.232 00:03:30.232 00:03:30.232 Suite: string 00:03:30.232 Test: test_parse_ip_addr ...passed 00:03:30.232 Test: test_str_chomp ...passed 00:03:30.232 Test: test_parse_capacity ...passed 00:03:30.232 Test: test_sprintf_append_realloc ...passed 00:03:30.232 Test: test_strtol ...passed 00:03:30.232 Test: test_strtoll ...passed 00:03:30.232 Test: test_strarray ...passed 00:03:30.232 Test: test_strcpy_replace ...passed 00:03:30.232 00:03:30.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.232 suites 1 1 n/a 0 0 00:03:30.232 tests 8 8 8 0 0 00:03:30.232 asserts 161 161 161 0 n/a 00:03:30.232 00:03:30.232 Elapsed time = 0.000 seconds 00:03:30.232 10:07:35 unittest.unittest_util -- unit/unittest.sh@142 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:03:30.232 00:03:30.232 00:03:30.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.232 http://cunit.sourceforge.net/ 00:03:30.232 00:03:30.232 00:03:30.232 Suite: dif 00:03:30.232 Test: dif_generate_and_verify_test ...[2024-06-10 10:07:35.784978] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:30.232 [2024-06-10 
10:07:35.785197] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:30.232 [2024-06-10 10:07:35.785252] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:30.232 [2024-06-10 10:07:35.785301] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:30.232 passed 00:03:30.232 Test: dif_disable_check_test ...[2024-06-10 10:07:35.785350] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:30.232 [2024-06-10 10:07:35.785403] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:30.232 [2024-06-10 10:07:35.785565] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:30.232 [2024-06-10 10:07:35.785623] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:30.232 [2024-06-10 10:07:35.785679] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:30.232 passed 00:03:30.232 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-06-10 10:07:35.785839] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:03:30.232 [2024-06-10 10:07:35.785895] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:03:30.232 [2024-06-10 10:07:35.785956] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:03:30.232 [2024-06-10 10:07:35.786017] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:03:30.232 [2024-06-10 10:07:35.786065] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:30.232 [2024-06-10 10:07:35.786114] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:30.232 [2024-06-10 10:07:35.786173] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:30.232 [2024-06-10 10:07:35.786231] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:30.232 [2024-06-10 10:07:35.786284] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:30.232 [2024-06-10 10:07:35.786346] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:30.232 passed 00:03:30.232 Test: dif_apptag_mask_test ...[2024-06-10 10:07:35.786405] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, 
Expected=c, Actual=0 00:03:30.232 [2024-06-10 10:07:35.786458] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:30.232 [2024-06-10 10:07:35.786517] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:30.232 passed 00:03:30.232 Test: dif_sec_512_md_0_error_test ...passed 00:03:30.232 Test: dif_sec_4096_md_0_error_test ...passed 00:03:30.232 Test: dif_sec_4100_md_128_error_test ...passed 00:03:30.232 Test: dif_guard_seed_test ...passed 00:03:30.232 Test: dif_guard_value_test ...[2024-06-10 10:07:35.786558] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:30.232 [2024-06-10 10:07:35.786582] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:30.232 [2024-06-10 10:07:35.786600] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:30.232 [2024-06-10 10:07:35.786624] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:30.232 [2024-06-10 10:07:35.786643] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:30.232 passed 00:03:30.232 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:03:30.232 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:03:30.232 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:30.232 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:30.232 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:30.232 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:03:30.232 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:30.232 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:30.232 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:03:30.232 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:30.232 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:03:30.232 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:03:30.232 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:30.232 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:30.232 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:30.232 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:30.232 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:30.232 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:30.233 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-10 10:07:35.792173] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd0c, Actual=fd4c 00:03:30.233 [2024-06-10 10:07:35.792536] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe61, Actual=fe21 00:03:30.233 [2024-06-10 10:07:35.792875] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=91, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.793203] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.793523] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.233 [2024-06-10 10:07:35.793852] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.233 [2024-06-10 10:07:35.794180] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=81c3 00:03:30.233 [2024-06-10 10:07:35.794405] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fe21, Actual=c576 00:03:30.233 [2024-06-10 10:07:35.794645] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1af753ed, Actual=1ab753ed 00:03:30.233 [2024-06-10 10:07:35.794971] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38174660, Actual=38574660 00:03:30.233 [2024-06-10 10:07:35.795300] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.795624] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.795953] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.233 [2024-06-10 10:07:35.796243] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.233 [2024-06-10 10:07:35.796533] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=dab0ae1c 00:03:30.233 [2024-06-10 10:07:35.796737] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=38574660, Actual=ce0fd342 00:03:30.233 [2024-06-10 10:07:35.796940] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:03:30.233 [2024-06-10 10:07:35.797230] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88010a2d4877a266, Actual=88010a2d4837a266 00:03:30.233 [2024-06-10 10:07:35.797518] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.797808] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.798097] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1b 00:03:30.233 [2024-06-10 10:07:35.798387] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1b 00:03:30.233 [2024-06-10 10:07:35.798676] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=1dc466c076db6d40 00:03:30.233 [2024-06-10 10:07:35.798880] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=88010a2d4837a266, Actual=ab3c2a4a0694d517 00:03:30.233 passed 00:03:30.233 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-06-10 10:07:35.798968] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:03:30.233 [2024-06-10 10:07:35.799008] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:03:30.233 [2024-06-10 10:07:35.799047] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.799087] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.799126] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.233 [2024-06-10 10:07:35.799164] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.233 [2024-06-10 10:07:35.799203] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=81c3 00:03:30.233 [2024-06-10 10:07:35.799236] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c576 00:03:30.233 [2024-06-10 10:07:35.799269] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:03:30.233 [2024-06-10 10:07:35.799308] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:03:30.233 [2024-06-10 10:07:35.799347] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.799385] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.799424] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.233 [2024-06-10 10:07:35.799462] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.233 [2024-06-10 10:07:35.799501] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=dab0ae1c 00:03:30.233 [2024-06-10 10:07:35.799534] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ce0fd342 00:03:30.233 [2024-06-10 10:07:35.799567] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 
00:03:30.233 [2024-06-10 10:07:35.799605] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4877a266, Actual=88010a2d4837a266 00:03:30.233 [2024-06-10 10:07:35.799644] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.799682] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.799729] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:03:30.233 passed 00:03:30.233 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-06-10 10:07:35.799767] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:03:30.233 [2024-06-10 10:07:35.799807] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1dc466c076db6d40 00:03:30.233 [2024-06-10 10:07:35.799839] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=ab3c2a4a0694d517 00:03:30.233 [2024-06-10 10:07:35.799885] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:03:30.233 [2024-06-10 10:07:35.799924] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:03:30.233 [2024-06-10 10:07:35.799963] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.800002] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.800041] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.233 [2024-06-10 10:07:35.800081] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.233 [2024-06-10 10:07:35.800120] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=81c3 00:03:30.233 [2024-06-10 10:07:35.800153] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c576 00:03:30.233 [2024-06-10 10:07:35.800186] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:03:30.233 [2024-06-10 10:07:35.800225] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:03:30.233 [2024-06-10 10:07:35.800264] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.800303] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 
00:03:30.233 [2024-06-10 10:07:35.800342] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.233 [2024-06-10 10:07:35.800381] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.233 [2024-06-10 10:07:35.800419] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=dab0ae1c 00:03:30.233 [2024-06-10 10:07:35.800452] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ce0fd342 00:03:30.233 [2024-06-10 10:07:35.800485] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:03:30.233 [2024-06-10 10:07:35.800524] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4877a266, Actual=88010a2d4837a266 00:03:30.233 [2024-06-10 10:07:35.800563] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.800601] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.233 [2024-06-10 10:07:35.800640] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:03:30.233 [2024-06-10 10:07:35.800686] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:03:30.233 passed 00:03:30.233 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-06-10 10:07:35.800724] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1dc466c076db6d40 00:03:30.233 [2024-06-10 10:07:35.800757] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=ab3c2a4a0694d517 00:03:30.233 [2024-06-10 10:07:35.800793] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:03:30.233 [2024-06-10 10:07:35.800833] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:03:30.234 [2024-06-10 10:07:35.800871] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.800910] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.800949] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.800988] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.801026] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=fd4c, Actual=81c3 00:03:30.234 [2024-06-10 10:07:35.801059] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c576 00:03:30.234 [2024-06-10 10:07:35.801092] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:03:30.234 [2024-06-10 10:07:35.801130] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:03:30.234 [2024-06-10 10:07:35.801169] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.801207] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.801246] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.801285] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.801324] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=dab0ae1c 00:03:30.234 [2024-06-10 10:07:35.801356] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ce0fd342 00:03:30.234 [2024-06-10 10:07:35.801390] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:03:30.234 [2024-06-10 10:07:35.801429] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4877a266, Actual=88010a2d4837a266 00:03:30.234 [2024-06-10 10:07:35.801467] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.801505] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.801544] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:03:30.234 [2024-06-10 10:07:35.801582] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:03:30.234 passed 00:03:30.234 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-06-10 10:07:35.801621] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1dc466c076db6d40 00:03:30.234 [2024-06-10 10:07:35.801653] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=ab3c2a4a0694d517 00:03:30.234 [2024-06-10 10:07:35.801689] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:03:30.234 [2024-06-10 10:07:35.801728] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:03:30.234 [2024-06-10 10:07:35.801766] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.801804] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.801845] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.801884] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.801922] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=81c3 00:03:30.234 [2024-06-10 10:07:35.801955] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c576 00:03:30.234 passed 00:03:30.234 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-06-10 10:07:35.801991] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:03:30.234 [2024-06-10 10:07:35.802029] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:03:30.234 [2024-06-10 10:07:35.802068] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.802107] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.802145] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.802184] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.802222] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=dab0ae1c 00:03:30.234 [2024-06-10 10:07:35.802255] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ce0fd342 00:03:30.234 [2024-06-10 10:07:35.802287] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:03:30.234 [2024-06-10 10:07:35.802326] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4877a266, Actual=88010a2d4837a266 00:03:30.234 [2024-06-10 10:07:35.802364] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.802402] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 
10:07:35.802441] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:03:30.234 [2024-06-10 10:07:35.802479] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:03:30.234 [2024-06-10 10:07:35.802518] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1dc466c076db6d40 00:03:30.234 passed 00:03:30.234 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-06-10 10:07:35.802550] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=ab3c2a4a0694d517 00:03:30.234 [2024-06-10 10:07:35.802586] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd0c, Actual=fd4c 00:03:30.234 [2024-06-10 10:07:35.802624] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe61, Actual=fe21 00:03:30.234 [2024-06-10 10:07:35.802663] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.802702] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.802740] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.802779] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 passed 00:03:30.234 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-06-10 10:07:35.802818] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=81c3 00:03:30.234 [2024-06-10 10:07:35.802851] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=c576 00:03:30.234 [2024-06-10 10:07:35.802887] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1af753ed, Actual=1ab753ed 00:03:30.234 [2024-06-10 10:07:35.802925] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38174660, Actual=38574660 00:03:30.234 [2024-06-10 10:07:35.802964] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.803002] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.234 [2024-06-10 10:07:35.803041] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.803079] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400058 00:03:30.234 [2024-06-10 10:07:35.803117] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=dab0ae1c 00:03:30.235 [2024-06-10 10:07:35.803150] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=ce0fd342 00:03:30.235 [2024-06-10 10:07:35.803183] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:03:30.235 [2024-06-10 10:07:35.803221] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4877a266, Actual=88010a2d4837a266 00:03:30.235 [2024-06-10 10:07:35.803260] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.803298] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.803337] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:03:30.235 [2024-06-10 10:07:35.803375] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=18 00:03:30.235 [2024-06-10 10:07:35.803413] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1dc466c076db6d40 00:03:30.235 [2024-06-10 10:07:35.803446] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=ab3c2a4a0694d517 00:03:30.235 passed 00:03:30.235 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:03:30.235 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:30.235 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:30.235 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:30.235 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:30.235 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:30.235 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:30.235 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:30.235 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:30.235 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-10 10:07:35.808631] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd0c, Actual=fd4c 00:03:30.235 [2024-06-10 10:07:35.808794] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=d757, Actual=d717 00:03:30.235 [2024-06-10 10:07:35.808954] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.809109] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.809268] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.235 [2024-06-10 10:07:35.809429] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.235 [2024-06-10 10:07:35.809586] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=81c3 00:03:30.235 [2024-06-10 10:07:35.809744] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=66db, Actual=5d8c 00:03:30.235 [2024-06-10 10:07:35.809910] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1af753ed, Actual=1ab753ed 00:03:30.235 [2024-06-10 10:07:35.810068] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=2d610c7f, Actual=2d210c7f 00:03:30.235 [2024-06-10 10:07:35.810225] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.810383] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.810540] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.235 [2024-06-10 10:07:35.810698] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.235 [2024-06-10 10:07:35.810856] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=dab0ae1c 00:03:30.235 [2024-06-10 10:07:35.811014] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=798b71ef, Actual=8fd3e4cd 00:03:30.235 [2024-06-10 10:07:35.811172] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:03:30.235 [2024-06-10 10:07:35.811329] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=2cfef193b2fd14fe, Actual=2cfef193b2bd14fe 00:03:30.235 [2024-06-10 10:07:35.811496] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.811658] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.811823] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1b 00:03:30.235 [2024-06-10 10:07:35.811982] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1b 00:03:30.235 [2024-06-10 10:07:35.812140] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=1dc466c076db6d40 00:03:30.235 passed 00:03:30.235 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-10 10:07:35.812298] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, 
Actual=8b8854ec38db4984 00:03:30.235 [2024-06-10 10:07:35.812348] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd0c, Actual=fd4c 00:03:30.235 [2024-06-10 10:07:35.812389] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=234c, Actual=230c 00:03:30.235 [2024-06-10 10:07:35.812430] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.812470] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.812511] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:03:30.235 [2024-06-10 10:07:35.812552] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:03:30.235 [2024-06-10 10:07:35.812592] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=81c3 00:03:30.235 [2024-06-10 10:07:35.812633] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=a997 00:03:30.235 [2024-06-10 10:07:35.812673] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1af753ed, Actual=1ab753ed 00:03:30.235 [2024-06-10 10:07:35.812714] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cf572dfd, Actual=cf172dfd 00:03:30.235 [2024-06-10 10:07:35.812755] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.812794] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.812835] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:03:30.235 [2024-06-10 10:07:35.812875] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:03:30.235 [2024-06-10 10:07:35.812915] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=dab0ae1c 00:03:30.235 [2024-06-10 10:07:35.812956] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=6de5c54f 00:03:30.235 [2024-06-10 10:07:35.813001] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:03:30.235 [2024-06-10 10:07:35.813042] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cc6365738dc11f64, Actual=cc6365738d811f64 00:03:30.235 [2024-06-10 10:07:35.813083] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.813124] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.813164] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:03:30.235 [2024-06-10 10:07:35.813205] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:03:30.235 [2024-06-10 10:07:35.813246] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=1dc466c076db6d40 00:03:30.235 passed 00:03:30.235 Test: dix_sec_512_md_0_error ...passed 00:03:30.235 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-06-10 10:07:35.813286] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=6b15c00c07e7421e 00:03:30.235 [2024-06-10 10:07:35.813298] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:30.235 passed 00:03:30.235 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:30.235 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:30.235 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:30.235 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:30.235 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:30.235 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:30.235 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:30.235 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:30.235 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-10 10:07:35.818292] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd0c, Actual=fd4c 00:03:30.235 [2024-06-10 10:07:35.818454] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=d757, Actual=d717 00:03:30.235 [2024-06-10 10:07:35.818615] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.235 [2024-06-10 10:07:35.818775] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.236 [2024-06-10 10:07:35.818933] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.236 [2024-06-10 10:07:35.819090] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.236 [2024-06-10 10:07:35.819248] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=fd4c, Actual=81c3 00:03:30.236 [2024-06-10 10:07:35.819407] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=66db, Actual=5d8c 00:03:30.236 [2024-06-10 10:07:35.819563] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1af753ed, Actual=1ab753ed 00:03:30.236 [2024-06-10 10:07:35.819728] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=2d610c7f, Actual=2d210c7f 00:03:30.236 [2024-06-10 10:07:35.819891] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.236 [2024-06-10 10:07:35.820047] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.236 [2024-06-10 10:07:35.820203] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.236 [2024-06-10 10:07:35.820358] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=40005b 00:03:30.236 [2024-06-10 10:07:35.820514] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=1ab753ed, Actual=dab0ae1c 00:03:30.236 [2024-06-10 10:07:35.820670] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=798b71ef, Actual=8fd3e4cd 00:03:30.236 [2024-06-10 10:07:35.820827] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:03:30.236 [2024-06-10 10:07:35.820984] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=2cfef193b2fd14fe, Actual=2cfef193b2bd14fe 00:03:30.236 [2024-06-10 10:07:35.821140] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.236 [2024-06-10 10:07:35.821296] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=91, Expected=88, Actual=c8 00:03:30.236 [2024-06-10 10:07:35.821453] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1b 00:03:30.236 [2024-06-10 10:07:35.821609] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=91, Expected=5b, Actual=1b 00:03:30.236 [2024-06-10 10:07:35.821766] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a576a7728ecc20d3, Actual=1dc466c076db6d40 00:03:30.236 [2024-06-10 10:07:35.821922] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=91, Expected=a8b5748b76783ef5, Actual=8b8854ec38db4984 00:03:30.236 passed 00:03:30.236 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-10 10:07:35.821970] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd0c, Actual=fd4c 00:03:30.236 [2024-06-10 10:07:35.822011] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=234c, Actual=230c 00:03:30.236 [2024-06-10 10:07:35.822059] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.236 [2024-06-10 10:07:35.822101] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.236 
[2024-06-10 10:07:35.822141] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:03:30.236 [2024-06-10 10:07:35.822181] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:03:30.236 [2024-06-10 10:07:35.822222] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=81c3 00:03:30.236 [2024-06-10 10:07:35.822262] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=a997 00:03:30.236 [2024-06-10 10:07:35.822303] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1af753ed, Actual=1ab753ed 00:03:30.236 [2024-06-10 10:07:35.822343] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cf572dfd, Actual=cf172dfd 00:03:30.236 [2024-06-10 10:07:35.822383] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.236 [2024-06-10 10:07:35.822422] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.236 [2024-06-10 10:07:35.822462] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:03:30.236 [2024-06-10 10:07:35.822502] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=400059 00:03:30.236 [2024-06-10 10:07:35.822541] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=dab0ae1c 00:03:30.236 [2024-06-10 10:07:35.822580] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=6de5c54f 00:03:30.236 [2024-06-10 10:07:35.822621] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728e8c20d3, Actual=a576a7728ecc20d3 00:03:30.236 [2024-06-10 10:07:35.822662] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=cc6365738dc11f64, Actual=cc6365738d811f64 00:03:30.236 [2024-06-10 10:07:35.822703] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.236 [2024-06-10 10:07:35.822743] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=c8 00:03:30.236 [2024-06-10 10:07:35.822783] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:03:30.236 [2024-06-10 10:07:35.822823] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=19 00:03:30.236 passed 00:03:30.236 Test: set_md_interleave_iovs_test ...[2024-06-10 10:07:35.822864] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=1dc466c076db6d40 
00:03:30.236 [2024-06-10 10:07:35.822904] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=6b15c00c07e7421e 00:03:30.236 passed 00:03:30.236 Test: set_md_interleave_iovs_split_test ...passed 00:03:30.236 Test: dif_generate_stream_pi_16_test ...passed 00:03:30.236 Test: dif_generate_stream_test ...passed 00:03:30.236 Test: set_md_interleave_iovs_alignment_test ...[2024-06-10 10:07:35.823756] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:03:30.236 passed 00:03:30.236 Test: dif_generate_split_test ...passed 00:03:30.236 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:03:30.236 Test: dif_verify_split_test ...passed 00:03:30.530 Test: dif_verify_stream_multi_segments_test ...passed 00:03:30.530 Test: update_crc32c_pi_16_test ...passed 00:03:30.530 Test: update_crc32c_test ...passed 00:03:30.530 Test: dif_update_crc32c_split_test ...passed 00:03:30.530 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:03:30.530 Test: get_range_with_md_test ...passed 00:03:30.530 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:03:30.530 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:03:30.530 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:30.530 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:03:30.530 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:03:30.530 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:30.530 Test: dif_generate_and_verify_unmap_test ...passed 00:03:30.530 00:03:30.530 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.530 suites 1 1 n/a 0 0 00:03:30.530 tests 79 79 79 0 0 00:03:30.530 asserts 3584 3584 3584 0 n/a 00:03:30.530 00:03:30.530 Elapsed time = 0.047 seconds 00:03:30.530 10:07:35 unittest.unittest_util -- unit/unittest.sh@143 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:03:30.530 00:03:30.530 00:03:30.530 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.530 http://cunit.sourceforge.net/ 00:03:30.530 00:03:30.530 00:03:30.530 Suite: iov 00:03:30.530 Test: test_single_iov ...passed 00:03:30.530 Test: test_simple_iov ...passed 00:03:30.530 Test: test_complex_iov ...passed 00:03:30.530 Test: test_iovs_to_buf ...passed 00:03:30.530 Test: test_buf_to_iovs ...passed 00:03:30.530 Test: test_memset ...passed 00:03:30.530 Test: test_iov_one ...passed 00:03:30.530 Test: test_iov_xfer ...passed 00:03:30.530 00:03:30.530 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.530 suites 1 1 n/a 0 0 00:03:30.530 tests 8 8 8 0 0 00:03:30.530 asserts 156 156 156 0 n/a 00:03:30.530 00:03:30.530 Elapsed time = 0.000 seconds 00:03:30.530 10:07:35 unittest.unittest_util -- unit/unittest.sh@144 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:03:30.530 00:03:30.530 00:03:30.530 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.530 http://cunit.sourceforge.net/ 00:03:30.530 00:03:30.530 00:03:30.530 Suite: math 00:03:30.530 Test: test_serial_number_arithmetic ...passed 00:03:30.530 Suite: erase 00:03:30.530 Test: test_memset_s ...passed 00:03:30.530 00:03:30.530 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.530 suites 2 2 n/a 0 0 00:03:30.530 tests 2 2 2 0 0 00:03:30.530 asserts 18 18 18 0 n/a 00:03:30.530 00:03:30.530 
Elapsed time = 0.000 seconds 00:03:30.530 10:07:35 unittest.unittest_util -- unit/unittest.sh@145 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:03:30.530 00:03:30.530 00:03:30.530 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.530 http://cunit.sourceforge.net/ 00:03:30.530 00:03:30.530 00:03:30.530 Suite: pipe 00:03:30.530 Test: test_create_destroy ...passed 00:03:30.530 Test: test_write_get_buffer ...passed 00:03:30.530 Test: test_write_advance ...passed 00:03:30.531 Test: test_read_get_buffer ...passed 00:03:30.531 Test: test_read_advance ...passed 00:03:30.531 Test: test_data ...passed 00:03:30.531 00:03:30.531 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.531 suites 1 1 n/a 0 0 00:03:30.531 tests 6 6 6 0 0 00:03:30.531 asserts 251 251 251 0 n/a 00:03:30.531 00:03:30.531 Elapsed time = 0.000 seconds 00:03:30.531 10:07:35 unittest.unittest_util -- unit/unittest.sh@146 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:03:30.531 00:03:30.531 00:03:30.531 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.531 http://cunit.sourceforge.net/ 00:03:30.531 00:03:30.531 00:03:30.531 Suite: xor 00:03:30.531 Test: test_xor_gen ...passed 00:03:30.531 00:03:30.531 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.531 suites 1 1 n/a 0 0 00:03:30.531 tests 1 1 1 0 0 00:03:30.531 asserts 17 17 17 0 n/a 00:03:30.531 00:03:30.531 Elapsed time = 0.000 seconds 00:03:30.531 00:03:30.531 real 0m0.119s 00:03:30.531 user 0m0.064s 00:03:30.531 sys 0m0.055s 00:03:30.531 10:07:35 unittest.unittest_util -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.531 ************************************ 00:03:30.531 END TEST unittest_util 00:03:30.531 ************************************ 00:03:30.531 10:07:35 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:03:30.531 10:07:35 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:30.531 10:07:35 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.531 ************************************ 00:03:30.531 START TEST unittest_dma 00:03:30.531 ************************************ 00:03:30.531 10:07:35 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:30.531 00:03:30.531 00:03:30.531 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.531 http://cunit.sourceforge.net/ 00:03:30.531 00:03:30.531 00:03:30.531 Suite: dma_suite 00:03:30.531 Test: test_dma ...passed 00:03:30.531 00:03:30.531 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.531 suites 1 1 n/a 0 0 00:03:30.531 tests 1 1 1 0 0 00:03:30.531 asserts 54 54 54 0 n/a 00:03:30.531 00:03:30.531 Elapsed time = 0.000 seconds 00:03:30.531 [2024-06-10 10:07:35.895997] /usr/home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:03:30.531 00:03:30.531 real 0m0.004s 00:03:30.531 user 0m0.000s 00:03:30.531 sys 0m0.008s 00:03:30.531 10:07:35 unittest.unittest_dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.531 10:07:35 
unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:03:30.531 ************************************ 00:03:30.531 END TEST unittest_dma 00:03:30.531 ************************************ 00:03:30.531 10:07:35 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.531 ************************************ 00:03:30.531 START TEST unittest_init 00:03:30.531 ************************************ 00:03:30.531 10:07:35 unittest.unittest_init -- common/autotest_common.sh@1124 -- # unittest_init 00:03:30.531 10:07:35 unittest.unittest_init -- unit/unittest.sh@150 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:03:30.531 00:03:30.531 00:03:30.531 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.531 http://cunit.sourceforge.net/ 00:03:30.531 00:03:30.531 00:03:30.531 Suite: subsystem_suite 00:03:30.531 Test: subsystem_sort_test_depends_on_single ...passed 00:03:30.531 Test: subsystem_sort_test_depends_on_multiple ...passed 00:03:30.531 Test: subsystem_sort_test_missing_dependency ...[2024-06-10 10:07:35.932884] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:03:30.531 passed 00:03:30.531 00:03:30.531 [2024-06-10 10:07:35.933085] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:03:30.531 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.531 suites 1 1 n/a 0 0 00:03:30.531 tests 3 3 3 0 0 00:03:30.531 asserts 20 20 20 0 n/a 00:03:30.531 00:03:30.531 Elapsed time = 0.000 seconds 00:03:30.531 00:03:30.531 real 0m0.005s 00:03:30.531 user 0m0.000s 00:03:30.531 sys 0m0.004s 00:03:30.531 10:07:35 unittest.unittest_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.531 10:07:35 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:03:30.531 ************************************ 00:03:30.531 END TEST unittest_init 00:03:30.531 ************************************ 00:03:30.531 10:07:35 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.531 ************************************ 00:03:30.531 START TEST unittest_keyring 00:03:30.531 ************************************ 00:03:30.531 10:07:35 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:03:30.531 00:03:30.531 00:03:30.531 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.531 http://cunit.sourceforge.net/ 00:03:30.531 00:03:30.531 00:03:30.531 Suite: keyring 00:03:30.531 Test: test_keyring_add_remove ...[2024-06-10 10:07:35.967039] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:03:30.531 [2024-06-10 10:07:35.967220] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already 
exists 00:03:30.531 [2024-06-10 10:07:35.967237] /usr/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:03:30.531 passed 00:03:30.531 Test: test_keyring_get_put ...passed 00:03:30.531 00:03:30.531 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.531 suites 1 1 n/a 0 0 00:03:30.531 tests 2 2 2 0 0 00:03:30.531 asserts 44 44 44 0 n/a 00:03:30.531 00:03:30.531 Elapsed time = 0.000 seconds 00:03:30.531 00:03:30.531 real 0m0.005s 00:03:30.531 user 0m0.000s 00:03:30.531 sys 0m0.006s 00:03:30.531 10:07:35 unittest.unittest_keyring -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.531 10:07:35 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:03:30.531 ************************************ 00:03:30.531 END TEST unittest_keyring 00:03:30.531 ************************************ 00:03:30.531 10:07:35 unittest -- unit/unittest.sh@292 -- # '[' no = yes ']' 00:03:30.531 10:07:35 unittest -- unit/unittest.sh@305 -- # set +x 00:03:30.531 00:03:30.531 00:03:30.531 ===================== 00:03:30.531 All unit tests passed 00:03:30.531 ===================== 00:03:30.531 WARN: lcov not installed or SPDK built without coverage! 00:03:30.531 WARN: neither valgrind nor ASAN is enabled! 00:03:30.531 00:03:30.531 00:03:30.531 00:03:30.531 real 0m13.582s 00:03:30.531 user 0m10.905s 00:03:30.531 sys 0m1.448s 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:30.531 10:07:35 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:30.531 ************************************ 00:03:30.531 END TEST unittest 00:03:30.531 ************************************ 00:03:30.531 10:07:36 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:30.531 10:07:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:30.531 10:07:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:30.531 10:07:36 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:30.531 10:07:36 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:30.531 10:07:36 -- common/autotest_common.sh@10 -- # set +x 00:03:30.531 10:07:36 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:30.531 10:07:36 -- spdk/autotest.sh@168 -- # run_test env /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:30.531 10:07:36 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.531 10:07:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.531 10:07:36 -- common/autotest_common.sh@10 -- # set +x 00:03:30.531 ************************************ 00:03:30.531 START TEST env 00:03:30.531 ************************************ 00:03:30.531 10:07:36 env -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:30.815 * Looking for test storage... 
00:03:30.815 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/env 00:03:30.815 10:07:36 env -- env/env.sh@10 -- # run_test env_memory /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:30.815 10:07:36 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:30.815 10:07:36 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:30.815 10:07:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.073 ************************************ 00:03:31.073 START TEST env_memory 00:03:31.073 ************************************ 00:03:31.073 10:07:36 env.env_memory -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:31.073 00:03:31.074 00:03:31.074 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.074 http://cunit.sourceforge.net/ 00:03:31.074 00:03:31.074 00:03:31.074 Suite: memory 00:03:31.074 Test: alloc and free memory map ...[2024-06-10 10:07:36.431913] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:31.074 passed 00:03:31.074 Test: mem map translation ...[2024-06-10 10:07:36.439198] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:31.074 [2024-06-10 10:07:36.439234] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:31.074 [2024-06-10 10:07:36.439250] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:31.074 [2024-06-10 10:07:36.439260] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:31.074 passed 00:03:31.074 Test: mem map registration ...[2024-06-10 10:07:36.451082] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:31.074 [2024-06-10 10:07:36.451122] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:31.074 passed 00:03:31.074 Test: mem map adjacent registrations ...passed 00:03:31.074 00:03:31.074 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.074 suites 1 1 n/a 0 0 00:03:31.074 tests 4 4 4 0 0 00:03:31.074 asserts 152 152 152 0 n/a 00:03:31.074 00:03:31.074 Elapsed time = 0.055 seconds 00:03:31.074 00:03:31.074 real 0m0.061s 00:03:31.074 user 0m0.059s 00:03:31.074 sys 0m0.003s 00:03:31.074 10:07:36 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:31.074 10:07:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:31.074 ************************************ 00:03:31.074 END TEST env_memory 00:03:31.074 ************************************ 00:03:31.074 10:07:36 env -- env/env.sh@11 -- # run_test env_vtophys /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:31.074 10:07:36 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:31.074 10:07:36 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:31.074 10:07:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.074 ************************************ 00:03:31.074 START TEST env_vtophys 00:03:31.074 
************************************ 00:03:31.074 10:07:36 env.env_vtophys -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:31.074 EAL: lib.eal log level changed from notice to debug 00:03:31.074 EAL: Sysctl reports 10 cpus 00:03:31.074 EAL: Detected lcore 0 as core 0 on socket 0 00:03:31.074 EAL: Detected lcore 1 as core 0 on socket 0 00:03:31.074 EAL: Detected lcore 2 as core 0 on socket 0 00:03:31.074 EAL: Detected lcore 3 as core 0 on socket 0 00:03:31.074 EAL: Detected lcore 4 as core 0 on socket 0 00:03:31.074 EAL: Detected lcore 5 as core 0 on socket 0 00:03:31.074 EAL: Detected lcore 6 as core 0 on socket 0 00:03:31.074 EAL: Detected lcore 7 as core 0 on socket 0 00:03:31.074 EAL: Detected lcore 8 as core 0 on socket 0 00:03:31.074 EAL: Detected lcore 9 as core 0 on socket 0 00:03:31.074 EAL: Maximum logical cores by configuration: 128 00:03:31.074 EAL: Detected CPU lcores: 10 00:03:31.074 EAL: Detected NUMA nodes: 1 00:03:31.074 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:31.074 EAL: Checking presence of .so 'librte_eal.so.24' 00:03:31.074 EAL: Checking presence of .so 'librte_eal.so' 00:03:31.074 EAL: Detected static linkage of DPDK 00:03:31.074 EAL: No shared files mode enabled, IPC will be disabled 00:03:31.074 EAL: PCI scan found 10 devices 00:03:31.074 EAL: Specific IOVA mode is not requested, autodetecting 00:03:31.074 EAL: Selecting IOVA mode according to bus requests 00:03:31.074 EAL: Bus pci wants IOVA as 'PA' 00:03:31.074 EAL: Selected IOVA mode 'PA' 00:03:31.074 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:31.074 EAL: Ask a virtual area of 0x2e000 bytes 00:03:31.074 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x100021a000) not respected! 00:03:31.074 EAL: This may cause issues with mapping memory into secondary processes 00:03:31.074 EAL: Virtual area found at 0x100021a000 (size = 0x2e000) 00:03:31.074 EAL: Setting up physically contiguous memory... 00:03:31.074 EAL: Ask a virtual area of 0x1000 bytes 00:03:31.074 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x10002ca000) not respected! 00:03:31.074 EAL: This may cause issues with mapping memory into secondary processes 00:03:31.074 EAL: Virtual area found at 0x10002ca000 (size = 0x1000) 00:03:31.074 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:03:31.074 EAL: Ask a virtual area of 0xf0000000 bytes 00:03:31.074 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:03:31.074 EAL: This may cause issues with mapping memory into secondary processes 00:03:31.074 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:03:31.074 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:03:31.074 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x160000000, len 268435456 00:03:31.074 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x170000000, len 268435456 00:03:31.334 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x180000000, len 268435456 00:03:31.334 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x190000000, len 268435456 00:03:31.334 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x1a0000000, len 268435456 00:03:31.334 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x1b0000000, len 268435456 00:03:31.334 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x1c0000000, len 268435456 00:03:31.593 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x1d0000000, len 268435456 00:03:31.593 EAL: No shared files mode enabled, IPC is disabled 00:03:31.593 EAL: Added 2048M to heap on socket 0 00:03:31.593 EAL: TSC is not safe to use in SMP mode 00:03:31.593 EAL: TSC is not invariant 00:03:31.593 EAL: TSC frequency is ~2100000 KHz 00:03:31.593 EAL: Main lcore 0 is ready (tid=82cc29000;cpuset=[0]) 00:03:31.593 EAL: PCI scan found 10 devices 00:03:31.593 EAL: Registering mem event callbacks not supported 00:03:31.593 00:03:31.593 00:03:31.593 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.593 http://cunit.sourceforge.net/ 00:03:31.593 00:03:31.593 00:03:31.594 Suite: components_suite 00:03:31.594 Test: vtophys_malloc_test ...passed 00:03:31.853 Test: vtophys_spdk_malloc_test ...passed 00:03:31.853 00:03:31.853 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.853 suites 1 1 n/a 0 0 00:03:31.853 tests 2 2 2 0 0 00:03:31.853 asserts 497 497 497 0 n/a 00:03:31.853 00:03:31.853 Elapsed time = 0.289 seconds 00:03:31.853 00:03:31.853 real 0m0.767s 00:03:31.853 user 0m0.308s 00:03:31.853 sys 0m0.457s 00:03:31.853 10:07:37 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:31.853 10:07:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:31.853 ************************************ 00:03:31.853 END TEST env_vtophys 00:03:31.853 ************************************ 00:03:31.853 10:07:37 env -- env/env.sh@12 -- # run_test env_pci /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:31.853 10:07:37 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:31.853 10:07:37 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:31.853 10:07:37 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.853 ************************************ 00:03:31.853 START TEST env_pci 00:03:31.853 ************************************ 00:03:31.853 10:07:37 env.env_pci -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:31.853 00:03:31.853 00:03:31.853 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.853 http://cunit.sourceforge.net/ 00:03:31.853 00:03:31.853 00:03:31.853 Suite: pci 00:03:31.853 Test: pci_hook ...passed 00:03:31.853 00:03:31.853 EAL: Cannot find device (10000:00:01.0) 00:03:31.853 EAL: Failed to attach device on primary process 00:03:31.853 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.853 suites 1 1 n/a 0 0 00:03:31.853 tests 1 1 1 0 0 00:03:31.853 asserts 25 25 25 0 n/a 00:03:31.853 00:03:31.853 Elapsed time = 0.000 seconds 00:03:31.853 00:03:31.853 
real 0m0.008s 00:03:31.853 user 0m0.001s 00:03:31.853 sys 0m0.007s 00:03:31.853 10:07:37 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:31.853 ************************************ 00:03:31.853 END TEST env_pci 00:03:31.853 ************************************ 00:03:31.853 10:07:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:31.853 10:07:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:31.853 10:07:37 env -- env/env.sh@15 -- # uname 00:03:31.853 10:07:37 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:03:31.853 10:07:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:31.853 10:07:37 env -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:03:31.853 10:07:37 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:31.853 10:07:37 env -- common/autotest_common.sh@10 -- # set +x 00:03:31.853 ************************************ 00:03:31.853 START TEST env_dpdk_post_init 00:03:31.853 ************************************ 00:03:31.853 10:07:37 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:31.853 EAL: Sysctl reports 10 cpus 00:03:31.853 EAL: Detected CPU lcores: 10 00:03:31.853 EAL: Detected NUMA nodes: 1 00:03:31.853 EAL: Detected static linkage of DPDK 00:03:31.853 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:31.853 EAL: Selected IOVA mode 'PA' 00:03:31.853 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:32.112 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x160000000, len 268435456 00:03:32.112 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x170000000, len 268435456 00:03:32.112 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x180000000, len 268435456 00:03:32.112 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x190000000, len 268435456 00:03:32.112 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x1a0000000, len 268435456 00:03:32.371 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x1b0000000, len 268435456 00:03:32.371 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x1c0000000, len 268435456 00:03:32.371 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x1d0000000, len 268435456 00:03:32.371 EAL: TSC is not safe to use in SMP mode 00:03:32.371 EAL: TSC is not invariant 00:03:32.371 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:32.371 [2024-06-10 10:07:37.848940] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:32.371 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:32.371 Starting DPDK initialization... 00:03:32.371 Starting SPDK post initialization... 00:03:32.371 SPDK NVMe probe 00:03:32.371 Attaching to 0000:00:10.0 00:03:32.371 Attached to 0000:00:10.0 00:03:32.371 Cleaning up... 
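The probe output above depends on the FreeBSD contigmem driver (8 x 256MB buffers, as logged) and on the emulated NVMe controller at 0000:00:10.0 being available to userspace. A minimal sketch for rerunning just this check by hand, assuming the standard setup script is what loads contigmem/nic_uio and claims the device:
  cd /usr/home/vagrant/spdk_repo/spdk
  scripts/setup.sh                                        # as root: loads contigmem/nic_uio and claims the NVMe device
  test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1   # same binary and core mask as in the run above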
00:03:32.371 00:03:32.371 real 0m0.510s 00:03:32.371 user 0m0.016s 00:03:32.371 sys 0m0.489s 00:03:32.371 10:07:37 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:32.371 ************************************ 00:03:32.371 END TEST env_dpdk_post_init 00:03:32.371 ************************************ 00:03:32.371 10:07:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:32.371 10:07:37 env -- env/env.sh@26 -- # uname 00:03:32.371 10:07:37 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:03:32.371 00:03:32.371 real 0m1.904s 00:03:32.371 user 0m0.600s 00:03:32.371 sys 0m1.390s 00:03:32.371 10:07:37 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:32.371 ************************************ 00:03:32.371 10:07:37 env -- common/autotest_common.sh@10 -- # set +x 00:03:32.371 END TEST env 00:03:32.371 ************************************ 00:03:32.371 10:07:37 -- spdk/autotest.sh@169 -- # run_test rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:32.371 10:07:37 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:32.371 10:07:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:32.371 10:07:37 -- common/autotest_common.sh@10 -- # set +x 00:03:32.630 ************************************ 00:03:32.630 START TEST rpc 00:03:32.630 ************************************ 00:03:32.630 10:07:37 rpc -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:32.630 * Looking for test storage... 00:03:32.630 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:32.630 10:07:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=46273 00:03:32.630 10:07:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:32.630 10:07:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 46273 00:03:32.630 10:07:38 rpc -- common/autotest_common.sh@830 -- # '[' -z 46273 ']' 00:03:32.630 10:07:38 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:32.630 10:07:38 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:03:32.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:32.630 10:07:38 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:32.630 10:07:38 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:03:32.630 10:07:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:32.630 10:07:38 rpc -- rpc/rpc.sh@64 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:32.630 [2024-06-10 10:07:38.188300] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:03:32.630 [2024-06-10 10:07:38.188492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:33.197 EAL: TSC is not safe to use in SMP mode 00:03:33.197 EAL: TSC is not invariant 00:03:33.197 [2024-06-10 10:07:38.647560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.197 [2024-06-10 10:07:38.723494] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:03:33.197 [2024-06-10 10:07:38.725590] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:33.197 [2024-06-10 10:07:38.725618] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 46273' to capture a snapshot of events at runtime. 
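The rpc_integrity test that follows drives this spdk_tgt instance through the rpc_cmd wrapper; the same sequence can be replayed by hand with scripts/rpc.py. A sketch, assuming the default /var/tmp/spdk.sock listen address:
  scripts/rpc.py bdev_malloc_create 8 512                # creates Malloc0 (8 MB, 512-byte blocks)
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length              # expect 2 while both bdevs exist
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length              # back to 0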
00:03:33.197 [2024-06-10 10:07:38.725642] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.764 10:07:39 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:03:33.764 10:07:39 rpc -- common/autotest_common.sh@863 -- # return 0 00:03:33.764 10:07:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:33.764 10:07:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:33.764 10:07:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:33.764 10:07:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:33.765 10:07:39 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:33.765 10:07:39 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:33.765 10:07:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.765 ************************************ 00:03:33.765 START TEST rpc_integrity 00:03:33.765 ************************************ 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:33.765 { 00:03:33.765 "name": "Malloc0", 00:03:33.765 "aliases": [ 00:03:33.765 "44af6d70-2711-11ef-b084-113036b5c18d" 00:03:33.765 ], 00:03:33.765 "product_name": "Malloc disk", 00:03:33.765 "block_size": 512, 00:03:33.765 "num_blocks": 16384, 00:03:33.765 "uuid": "44af6d70-2711-11ef-b084-113036b5c18d", 00:03:33.765 "assigned_rate_limits": { 00:03:33.765 "rw_ios_per_sec": 0, 00:03:33.765 "rw_mbytes_per_sec": 0, 00:03:33.765 "r_mbytes_per_sec": 0, 00:03:33.765 "w_mbytes_per_sec": 0 00:03:33.765 }, 00:03:33.765 "claimed": false, 00:03:33.765 "zoned": false, 00:03:33.765 "supported_io_types": { 00:03:33.765 "read": true, 00:03:33.765 "write": true, 00:03:33.765 
"unmap": true, 00:03:33.765 "write_zeroes": true, 00:03:33.765 "flush": true, 00:03:33.765 "reset": true, 00:03:33.765 "compare": false, 00:03:33.765 "compare_and_write": false, 00:03:33.765 "abort": true, 00:03:33.765 "nvme_admin": false, 00:03:33.765 "nvme_io": false 00:03:33.765 }, 00:03:33.765 "memory_domains": [ 00:03:33.765 { 00:03:33.765 "dma_device_id": "system", 00:03:33.765 "dma_device_type": 1 00:03:33.765 }, 00:03:33.765 { 00:03:33.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.765 "dma_device_type": 2 00:03:33.765 } 00:03:33.765 ], 00:03:33.765 "driver_specific": {} 00:03:33.765 } 00:03:33.765 ]' 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.765 [2024-06-10 10:07:39.240919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:33.765 [2024-06-10 10:07:39.240954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:33.765 [2024-06-10 10:07:39.241440] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cf45a00 00:03:33.765 [2024-06-10 10:07:39.241460] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:33.765 [2024-06-10 10:07:39.242166] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:33.765 [2024-06-10 10:07:39.242193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:33.765 Passthru0 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:33.765 { 00:03:33.765 "name": "Malloc0", 00:03:33.765 "aliases": [ 00:03:33.765 "44af6d70-2711-11ef-b084-113036b5c18d" 00:03:33.765 ], 00:03:33.765 "product_name": "Malloc disk", 00:03:33.765 "block_size": 512, 00:03:33.765 "num_blocks": 16384, 00:03:33.765 "uuid": "44af6d70-2711-11ef-b084-113036b5c18d", 00:03:33.765 "assigned_rate_limits": { 00:03:33.765 "rw_ios_per_sec": 0, 00:03:33.765 "rw_mbytes_per_sec": 0, 00:03:33.765 "r_mbytes_per_sec": 0, 00:03:33.765 "w_mbytes_per_sec": 0 00:03:33.765 }, 00:03:33.765 "claimed": true, 00:03:33.765 "claim_type": "exclusive_write", 00:03:33.765 "zoned": false, 00:03:33.765 "supported_io_types": { 00:03:33.765 "read": true, 00:03:33.765 "write": true, 00:03:33.765 "unmap": true, 00:03:33.765 "write_zeroes": true, 00:03:33.765 "flush": true, 00:03:33.765 "reset": true, 00:03:33.765 "compare": false, 00:03:33.765 "compare_and_write": false, 00:03:33.765 "abort": true, 00:03:33.765 "nvme_admin": false, 00:03:33.765 "nvme_io": false 00:03:33.765 }, 00:03:33.765 "memory_domains": [ 00:03:33.765 { 00:03:33.765 "dma_device_id": "system", 00:03:33.765 "dma_device_type": 1 00:03:33.765 }, 00:03:33.765 { 00:03:33.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:03:33.765 "dma_device_type": 2 00:03:33.765 } 00:03:33.765 ], 00:03:33.765 "driver_specific": {} 00:03:33.765 }, 00:03:33.765 { 00:03:33.765 "name": "Passthru0", 00:03:33.765 "aliases": [ 00:03:33.765 "467fcd77-46dc-c251-8afc-ec01ee0b2a0a" 00:03:33.765 ], 00:03:33.765 "product_name": "passthru", 00:03:33.765 "block_size": 512, 00:03:33.765 "num_blocks": 16384, 00:03:33.765 "uuid": "467fcd77-46dc-c251-8afc-ec01ee0b2a0a", 00:03:33.765 "assigned_rate_limits": { 00:03:33.765 "rw_ios_per_sec": 0, 00:03:33.765 "rw_mbytes_per_sec": 0, 00:03:33.765 "r_mbytes_per_sec": 0, 00:03:33.765 "w_mbytes_per_sec": 0 00:03:33.765 }, 00:03:33.765 "claimed": false, 00:03:33.765 "zoned": false, 00:03:33.765 "supported_io_types": { 00:03:33.765 "read": true, 00:03:33.765 "write": true, 00:03:33.765 "unmap": true, 00:03:33.765 "write_zeroes": true, 00:03:33.765 "flush": true, 00:03:33.765 "reset": true, 00:03:33.765 "compare": false, 00:03:33.765 "compare_and_write": false, 00:03:33.765 "abort": true, 00:03:33.765 "nvme_admin": false, 00:03:33.765 "nvme_io": false 00:03:33.765 }, 00:03:33.765 "memory_domains": [ 00:03:33.765 { 00:03:33.765 "dma_device_id": "system", 00:03:33.765 "dma_device_type": 1 00:03:33.765 }, 00:03:33.765 { 00:03:33.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.765 "dma_device_type": 2 00:03:33.765 } 00:03:33.765 ], 00:03:33.765 "driver_specific": { 00:03:33.765 "passthru": { 00:03:33.765 "name": "Passthru0", 00:03:33.765 "base_bdev_name": "Malloc0" 00:03:33.765 } 00:03:33.765 } 00:03:33.765 } 00:03:33.765 ]' 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:33.765 10:07:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:33.765 00:03:33.765 real 0m0.146s 00:03:33.765 user 0m0.060s 00:03:33.765 sys 0m0.017s 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:33.765 ************************************ 00:03:33.765 END TEST rpc_integrity 00:03:33.765 10:07:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.765 ************************************ 00:03:33.765 10:07:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:33.765 10:07:39 rpc -- common/autotest_common.sh@1100 -- 
# '[' 2 -le 1 ']' 00:03:33.765 10:07:39 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:33.765 10:07:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 ************************************ 00:03:34.024 START TEST rpc_plugins 00:03:34.024 ************************************ 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:34.024 { 00:03:34.024 "name": "Malloc1", 00:03:34.024 "aliases": [ 00:03:34.024 "44c90f3c-2711-11ef-b084-113036b5c18d" 00:03:34.024 ], 00:03:34.024 "product_name": "Malloc disk", 00:03:34.024 "block_size": 4096, 00:03:34.024 "num_blocks": 256, 00:03:34.024 "uuid": "44c90f3c-2711-11ef-b084-113036b5c18d", 00:03:34.024 "assigned_rate_limits": { 00:03:34.024 "rw_ios_per_sec": 0, 00:03:34.024 "rw_mbytes_per_sec": 0, 00:03:34.024 "r_mbytes_per_sec": 0, 00:03:34.024 "w_mbytes_per_sec": 0 00:03:34.024 }, 00:03:34.024 "claimed": false, 00:03:34.024 "zoned": false, 00:03:34.024 "supported_io_types": { 00:03:34.024 "read": true, 00:03:34.024 "write": true, 00:03:34.024 "unmap": true, 00:03:34.024 "write_zeroes": true, 00:03:34.024 "flush": true, 00:03:34.024 "reset": true, 00:03:34.024 "compare": false, 00:03:34.024 "compare_and_write": false, 00:03:34.024 "abort": true, 00:03:34.024 "nvme_admin": false, 00:03:34.024 "nvme_io": false 00:03:34.024 }, 00:03:34.024 "memory_domains": [ 00:03:34.024 { 00:03:34.024 "dma_device_id": "system", 00:03:34.024 "dma_device_type": 1 00:03:34.024 }, 00:03:34.024 { 00:03:34.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.024 "dma_device_type": 2 00:03:34.024 } 00:03:34.024 ], 00:03:34.024 "driver_specific": {} 00:03:34.024 } 00:03:34.024 ]' 00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 
00:03:34.024 10:07:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:34.024 00:03:34.024 real 0m0.079s 00:03:34.024 user 0m0.015s 00:03:34.024 sys 0m0.025s 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:34.024 ************************************ 00:03:34.024 10:07:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 END TEST rpc_plugins 00:03:34.024 ************************************ 00:03:34.024 10:07:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:34.024 10:07:39 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:34.024 10:07:39 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:34.024 10:07:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 ************************************ 00:03:34.024 START TEST rpc_trace_cmd_test 00:03:34.024 ************************************ 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:34.024 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid46273", 00:03:34.024 "tpoint_group_mask": "0x8", 00:03:34.024 "iscsi_conn": { 00:03:34.024 "mask": "0x2", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "scsi": { 00:03:34.024 "mask": "0x4", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "bdev": { 00:03:34.024 "mask": "0x8", 00:03:34.024 "tpoint_mask": "0xffffffffffffffff" 00:03:34.024 }, 00:03:34.024 "nvmf_rdma": { 00:03:34.024 "mask": "0x10", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "nvmf_tcp": { 00:03:34.024 "mask": "0x20", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "blobfs": { 00:03:34.024 "mask": "0x80", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "dsa": { 00:03:34.024 "mask": "0x200", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "thread": { 00:03:34.024 "mask": "0x400", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "nvme_pcie": { 00:03:34.024 "mask": "0x800", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "iaa": { 00:03:34.024 "mask": "0x1000", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "nvme_tcp": { 00:03:34.024 "mask": "0x2000", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "bdev_nvme": { 00:03:34.024 "mask": "0x4000", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 }, 00:03:34.024 "sock": { 00:03:34.024 "mask": "0x8000", 00:03:34.024 "tpoint_mask": "0x0" 00:03:34.024 } 00:03:34.024 }' 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:34.024 10:07:39 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:34.024 00:03:34.024 real 0m0.056s 00:03:34.024 user 0m0.025s 00:03:34.024 sys 0m0.024s 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:34.024 10:07:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 ************************************ 00:03:34.024 END TEST rpc_trace_cmd_test 00:03:34.024 ************************************ 00:03:34.024 10:07:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:34.024 10:07:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:34.024 10:07:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:34.024 10:07:39 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:34.024 10:07:39 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:34.024 10:07:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 ************************************ 00:03:34.024 START TEST rpc_daemon_integrity 00:03:34.024 ************************************ 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.024 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.284 { 00:03:34.284 "name": "Malloc2", 00:03:34.284 "aliases": [ 00:03:34.284 "44edaf14-2711-11ef-b084-113036b5c18d" 00:03:34.284 ], 00:03:34.284 "product_name": "Malloc disk", 00:03:34.284 "block_size": 512, 00:03:34.284 "num_blocks": 16384, 00:03:34.284 "uuid": "44edaf14-2711-11ef-b084-113036b5c18d", 00:03:34.284 "assigned_rate_limits": { 00:03:34.284 "rw_ios_per_sec": 0, 00:03:34.284 "rw_mbytes_per_sec": 0, 00:03:34.284 "r_mbytes_per_sec": 
0, 00:03:34.284 "w_mbytes_per_sec": 0 00:03:34.284 }, 00:03:34.284 "claimed": false, 00:03:34.284 "zoned": false, 00:03:34.284 "supported_io_types": { 00:03:34.284 "read": true, 00:03:34.284 "write": true, 00:03:34.284 "unmap": true, 00:03:34.284 "write_zeroes": true, 00:03:34.284 "flush": true, 00:03:34.284 "reset": true, 00:03:34.284 "compare": false, 00:03:34.284 "compare_and_write": false, 00:03:34.284 "abort": true, 00:03:34.284 "nvme_admin": false, 00:03:34.284 "nvme_io": false 00:03:34.284 }, 00:03:34.284 "memory_domains": [ 00:03:34.284 { 00:03:34.284 "dma_device_id": "system", 00:03:34.284 "dma_device_type": 1 00:03:34.284 }, 00:03:34.284 { 00:03:34.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.284 "dma_device_type": 2 00:03:34.284 } 00:03:34.284 ], 00:03:34.284 "driver_specific": {} 00:03:34.284 } 00:03:34.284 ]' 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.284 [2024-06-10 10:07:39.652967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:34.284 [2024-06-10 10:07:39.653009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.284 [2024-06-10 10:07:39.653033] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cf45a00 00:03:34.284 [2024-06-10 10:07:39.653040] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.284 [2024-06-10 10:07:39.653513] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:34.284 [2024-06-10 10:07:39.653541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.284 Passthru0 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:34.284 { 00:03:34.284 "name": "Malloc2", 00:03:34.284 "aliases": [ 00:03:34.284 "44edaf14-2711-11ef-b084-113036b5c18d" 00:03:34.284 ], 00:03:34.284 "product_name": "Malloc disk", 00:03:34.284 "block_size": 512, 00:03:34.284 "num_blocks": 16384, 00:03:34.284 "uuid": "44edaf14-2711-11ef-b084-113036b5c18d", 00:03:34.284 "assigned_rate_limits": { 00:03:34.284 "rw_ios_per_sec": 0, 00:03:34.284 "rw_mbytes_per_sec": 0, 00:03:34.284 "r_mbytes_per_sec": 0, 00:03:34.284 "w_mbytes_per_sec": 0 00:03:34.284 }, 00:03:34.284 "claimed": true, 00:03:34.284 "claim_type": "exclusive_write", 00:03:34.284 "zoned": false, 00:03:34.284 "supported_io_types": { 00:03:34.284 "read": true, 00:03:34.284 "write": true, 00:03:34.284 "unmap": true, 00:03:34.284 "write_zeroes": true, 00:03:34.284 "flush": true, 00:03:34.284 "reset": true, 00:03:34.284 "compare": false, 00:03:34.284 "compare_and_write": false, 00:03:34.284 "abort": true, 
00:03:34.284 "nvme_admin": false, 00:03:34.284 "nvme_io": false 00:03:34.284 }, 00:03:34.284 "memory_domains": [ 00:03:34.284 { 00:03:34.284 "dma_device_id": "system", 00:03:34.284 "dma_device_type": 1 00:03:34.284 }, 00:03:34.284 { 00:03:34.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.284 "dma_device_type": 2 00:03:34.284 } 00:03:34.284 ], 00:03:34.284 "driver_specific": {} 00:03:34.284 }, 00:03:34.284 { 00:03:34.284 "name": "Passthru0", 00:03:34.284 "aliases": [ 00:03:34.284 "431564ed-f966-605d-978f-972c3e369c86" 00:03:34.284 ], 00:03:34.284 "product_name": "passthru", 00:03:34.284 "block_size": 512, 00:03:34.284 "num_blocks": 16384, 00:03:34.284 "uuid": "431564ed-f966-605d-978f-972c3e369c86", 00:03:34.284 "assigned_rate_limits": { 00:03:34.284 "rw_ios_per_sec": 0, 00:03:34.284 "rw_mbytes_per_sec": 0, 00:03:34.284 "r_mbytes_per_sec": 0, 00:03:34.284 "w_mbytes_per_sec": 0 00:03:34.284 }, 00:03:34.284 "claimed": false, 00:03:34.284 "zoned": false, 00:03:34.284 "supported_io_types": { 00:03:34.284 "read": true, 00:03:34.284 "write": true, 00:03:34.284 "unmap": true, 00:03:34.284 "write_zeroes": true, 00:03:34.284 "flush": true, 00:03:34.284 "reset": true, 00:03:34.284 "compare": false, 00:03:34.284 "compare_and_write": false, 00:03:34.284 "abort": true, 00:03:34.284 "nvme_admin": false, 00:03:34.284 "nvme_io": false 00:03:34.284 }, 00:03:34.284 "memory_domains": [ 00:03:34.284 { 00:03:34.284 "dma_device_id": "system", 00:03:34.284 "dma_device_type": 1 00:03:34.284 }, 00:03:34.284 { 00:03:34.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.284 "dma_device_type": 2 00:03:34.284 } 00:03:34.284 ], 00:03:34.284 "driver_specific": { 00:03:34.284 "passthru": { 00:03:34.284 "name": "Passthru0", 00:03:34.284 "base_bdev_name": "Malloc2" 00:03:34.284 } 00:03:34.284 } 00:03:34.284 } 00:03:34.284 ]' 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:34.284 00:03:34.284 real 0m0.137s 00:03:34.284 user 0m0.029s 00:03:34.284 sys 0m0.045s 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:03:34.284 ************************************ 00:03:34.284 END TEST rpc_daemon_integrity 00:03:34.284 ************************************ 00:03:34.284 10:07:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.284 10:07:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:34.284 10:07:39 rpc -- rpc/rpc.sh@84 -- # killprocess 46273 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@949 -- # '[' -z 46273 ']' 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@953 -- # kill -0 46273 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@954 -- # uname 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@957 -- # ps -c -o command 46273 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@957 -- # tail -1 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:03:34.284 killing process with pid 46273 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 46273' 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@968 -- # kill 46273 00:03:34.284 10:07:39 rpc -- common/autotest_common.sh@973 -- # wait 46273 00:03:34.544 00:03:34.544 real 0m2.027s 00:03:34.544 user 0m2.078s 00:03:34.544 sys 0m0.866s 00:03:34.544 10:07:40 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:34.544 10:07:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.544 ************************************ 00:03:34.544 END TEST rpc 00:03:34.544 ************************************ 00:03:34.544 10:07:40 -- spdk/autotest.sh@170 -- # run_test skip_rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:34.544 10:07:40 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:34.544 10:07:40 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:34.544 10:07:40 -- common/autotest_common.sh@10 -- # set +x 00:03:34.544 ************************************ 00:03:34.544 START TEST skip_rpc 00:03:34.544 ************************************ 00:03:34.544 10:07:40 skip_rpc -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:34.803 * Looking for test storage... 
00:03:34.803 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:34.803 10:07:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:34.803 10:07:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:34.803 10:07:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:34.803 10:07:40 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:34.803 10:07:40 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:34.803 10:07:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.803 ************************************ 00:03:34.803 START TEST skip_rpc 00:03:34.803 ************************************ 00:03:34.803 10:07:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:03:34.803 10:07:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=46449 00:03:34.803 10:07:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:34.803 10:07:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:34.803 10:07:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:34.803 [2024-06-10 10:07:40.209299] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:03:34.803 [2024-06-10 10:07:40.209466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:35.371 EAL: TSC is not safe to use in SMP mode 00:03:35.371 EAL: TSC is not invariant 00:03:35.371 [2024-06-10 10:07:40.677711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.371 [2024-06-10 10:07:40.755690] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
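Because this target was started with --no-rpc-server, the check that follows expects any RPC to fail; test_skip_rpc asserts that spdk_get_version exits non-zero and then tears the target down. A rough manual equivalent, assuming the same build tree:
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5                                                # same settle time the test uses
  scripts/rpc.py spdk_get_version && echo 'unexpected: RPC server answered'
  kill $pid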
00:03:35.371 [2024-06-10 10:07:40.757777] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 46449 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 46449 ']' 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 46449 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # tail -1 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # ps -c -o command 46449 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:03:40.643 killing process with pid 46449 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 46449' 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 46449 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 46449 00:03:40.643 00:03:40.643 real 0m5.479s 00:03:40.643 user 0m4.955s 00:03:40.643 sys 0m0.549s 00:03:40.643 ************************************ 00:03:40.643 END TEST skip_rpc 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:40.643 10:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.643 ************************************ 00:03:40.643 10:07:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:40.643 10:07:45 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:40.643 10:07:45 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:40.643 10:07:45 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:03:40.643 ************************************ 00:03:40.643 START TEST skip_rpc_with_json 00:03:40.643 ************************************ 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=46494 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 46494 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 46494 ']' 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:03:40.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:03:40.644 10:07:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:40.644 [2024-06-10 10:07:45.734102] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:03:40.644 [2024-06-10 10:07:45.734280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:40.644 EAL: TSC is not safe to use in SMP mode 00:03:40.644 EAL: TSC is not invariant 00:03:40.902 [2024-06-10 10:07:46.244588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.902 [2024-06-10 10:07:46.336482] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
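With the RPC server available this time, the flow recorded below configures the target at runtime and then proves the configuration survives a restart. A simplified sketch of that sequence (paths taken from the log's CONFIG_PATH and LOG_PATH; the redirections and variable names are assumptions, not the literal rpc/skip_rpc.sh body):
# Configure over RPC, snapshot the config, relaunch from the snapshot.
./scripts/rpc.py nvmf_create_transport -t tcp             # runtime change via RPC
./scripts/rpc.py save_config > test/rpc/config.json       # CONFIG_PATH in the log
kill "$spdk_pid"; wait "$spdk_pid"
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json \
    > test/rpc/log.txt 2>&1 &                             # LOG_PATH in the log
sleep 5
grep -q 'TCP Transport Init' test/rpc/log.txt             # transport re-created from JSON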
00:03:40.902 [2024-06-10 10:07:46.339113] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.470 [2024-06-10 10:07:46.876973] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:41.470 request: 00:03:41.470 { 00:03:41.470 "trtype": "tcp", 00:03:41.470 "method": "nvmf_get_transports", 00:03:41.470 "req_id": 1 00:03:41.470 } 00:03:41.470 Got JSON-RPC error response 00:03:41.470 response: 00:03:41.470 { 00:03:41.470 "code": -19, 00:03:41.470 "message": "Operation not supported by device" 00:03:41.470 } 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.470 [2024-06-10 10:07:46.888982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:03:41.470 10:07:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.470 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:03:41.470 10:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:41.470 { 00:03:41.470 "subsystems": [ 00:03:41.470 { 00:03:41.470 "subsystem": "vmd", 00:03:41.470 "config": [] 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "subsystem": "iobuf", 00:03:41.470 "config": [ 00:03:41.470 { 00:03:41.470 "method": "iobuf_set_options", 00:03:41.470 "params": { 00:03:41.470 "small_pool_count": 8192, 00:03:41.470 "large_pool_count": 1024, 00:03:41.470 "small_bufsize": 8192, 00:03:41.470 "large_bufsize": 135168 00:03:41.470 } 00:03:41.470 } 00:03:41.470 ] 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "subsystem": "scheduler", 00:03:41.470 "config": [ 00:03:41.470 { 00:03:41.470 "method": "framework_set_scheduler", 00:03:41.470 "params": { 00:03:41.470 "name": "static" 00:03:41.470 } 00:03:41.470 } 00:03:41.470 ] 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "subsystem": "sock", 00:03:41.470 "config": [ 00:03:41.470 { 00:03:41.470 "method": "sock_set_default_impl", 00:03:41.470 "params": { 00:03:41.470 "impl_name": "posix" 00:03:41.470 } 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "method": "sock_impl_set_options", 00:03:41.470 "params": { 00:03:41.470 "impl_name": "ssl", 00:03:41.470 "recv_buf_size": 4096, 00:03:41.470 "send_buf_size": 4096, 00:03:41.470 "enable_recv_pipe": true, 00:03:41.470 "enable_quickack": false, 00:03:41.470 "enable_placement_id": 0, 00:03:41.470 
"enable_zerocopy_send_server": true, 00:03:41.470 "enable_zerocopy_send_client": false, 00:03:41.470 "zerocopy_threshold": 0, 00:03:41.470 "tls_version": 0, 00:03:41.470 "enable_ktls": false 00:03:41.470 } 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "method": "sock_impl_set_options", 00:03:41.470 "params": { 00:03:41.470 "impl_name": "posix", 00:03:41.470 "recv_buf_size": 2097152, 00:03:41.470 "send_buf_size": 2097152, 00:03:41.470 "enable_recv_pipe": true, 00:03:41.470 "enable_quickack": false, 00:03:41.470 "enable_placement_id": 0, 00:03:41.470 "enable_zerocopy_send_server": true, 00:03:41.470 "enable_zerocopy_send_client": false, 00:03:41.470 "zerocopy_threshold": 0, 00:03:41.470 "tls_version": 0, 00:03:41.470 "enable_ktls": false 00:03:41.470 } 00:03:41.470 } 00:03:41.470 ] 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "subsystem": "keyring", 00:03:41.470 "config": [] 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "subsystem": "accel", 00:03:41.470 "config": [ 00:03:41.470 { 00:03:41.470 "method": "accel_set_options", 00:03:41.470 "params": { 00:03:41.470 "small_cache_size": 128, 00:03:41.470 "large_cache_size": 16, 00:03:41.470 "task_count": 2048, 00:03:41.470 "sequence_count": 2048, 00:03:41.470 "buf_count": 2048 00:03:41.470 } 00:03:41.470 } 00:03:41.470 ] 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "subsystem": "bdev", 00:03:41.470 "config": [ 00:03:41.470 { 00:03:41.470 "method": "bdev_set_options", 00:03:41.470 "params": { 00:03:41.470 "bdev_io_pool_size": 65535, 00:03:41.470 "bdev_io_cache_size": 256, 00:03:41.470 "bdev_auto_examine": true, 00:03:41.470 "iobuf_small_cache_size": 128, 00:03:41.470 "iobuf_large_cache_size": 16 00:03:41.470 } 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "method": "bdev_raid_set_options", 00:03:41.470 "params": { 00:03:41.470 "process_window_size_kb": 1024 00:03:41.470 } 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "method": "bdev_nvme_set_options", 00:03:41.470 "params": { 00:03:41.470 "action_on_timeout": "none", 00:03:41.470 "timeout_us": 0, 00:03:41.470 "timeout_admin_us": 0, 00:03:41.470 "keep_alive_timeout_ms": 10000, 00:03:41.470 "arbitration_burst": 0, 00:03:41.470 "low_priority_weight": 0, 00:03:41.470 "medium_priority_weight": 0, 00:03:41.470 "high_priority_weight": 0, 00:03:41.470 "nvme_adminq_poll_period_us": 10000, 00:03:41.470 "nvme_ioq_poll_period_us": 0, 00:03:41.470 "io_queue_requests": 0, 00:03:41.470 "delay_cmd_submit": true, 00:03:41.470 "transport_retry_count": 4, 00:03:41.470 "bdev_retry_count": 3, 00:03:41.470 "transport_ack_timeout": 0, 00:03:41.470 "ctrlr_loss_timeout_sec": 0, 00:03:41.470 "reconnect_delay_sec": 0, 00:03:41.470 "fast_io_fail_timeout_sec": 0, 00:03:41.470 "disable_auto_failback": false, 00:03:41.470 "generate_uuids": false, 00:03:41.470 "transport_tos": 0, 00:03:41.470 "nvme_error_stat": false, 00:03:41.470 "rdma_srq_size": 0, 00:03:41.470 "io_path_stat": false, 00:03:41.470 "allow_accel_sequence": false, 00:03:41.470 "rdma_max_cq_size": 0, 00:03:41.470 "rdma_cm_event_timeout_ms": 0, 00:03:41.470 "dhchap_digests": [ 00:03:41.470 "sha256", 00:03:41.470 "sha384", 00:03:41.470 "sha512" 00:03:41.470 ], 00:03:41.470 "dhchap_dhgroups": [ 00:03:41.470 "null", 00:03:41.470 "ffdhe2048", 00:03:41.470 "ffdhe3072", 00:03:41.470 "ffdhe4096", 00:03:41.470 "ffdhe6144", 00:03:41.470 "ffdhe8192" 00:03:41.470 ] 00:03:41.470 } 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "method": "bdev_nvme_set_hotplug", 00:03:41.470 "params": { 00:03:41.470 "period_us": 100000, 00:03:41.470 "enable": false 00:03:41.470 } 00:03:41.470 }, 00:03:41.470 
{ 00:03:41.470 "method": "bdev_wait_for_examine" 00:03:41.470 } 00:03:41.470 ] 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "subsystem": "scsi", 00:03:41.470 "config": null 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "subsystem": "nvmf", 00:03:41.470 "config": [ 00:03:41.470 { 00:03:41.470 "method": "nvmf_set_config", 00:03:41.470 "params": { 00:03:41.470 "discovery_filter": "match_any", 00:03:41.470 "admin_cmd_passthru": { 00:03:41.470 "identify_ctrlr": false 00:03:41.470 } 00:03:41.470 } 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "method": "nvmf_set_max_subsystems", 00:03:41.470 "params": { 00:03:41.470 "max_subsystems": 1024 00:03:41.470 } 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "method": "nvmf_set_crdt", 00:03:41.470 "params": { 00:03:41.470 "crdt1": 0, 00:03:41.470 "crdt2": 0, 00:03:41.470 "crdt3": 0 00:03:41.470 } 00:03:41.470 }, 00:03:41.470 { 00:03:41.470 "method": "nvmf_create_transport", 00:03:41.470 "params": { 00:03:41.470 "trtype": "TCP", 00:03:41.470 "max_queue_depth": 128, 00:03:41.470 "max_io_qpairs_per_ctrlr": 127, 00:03:41.470 "in_capsule_data_size": 4096, 00:03:41.470 "max_io_size": 131072, 00:03:41.470 "io_unit_size": 131072, 00:03:41.471 "max_aq_depth": 128, 00:03:41.471 "num_shared_buffers": 511, 00:03:41.471 "buf_cache_size": 4294967295, 00:03:41.471 "dif_insert_or_strip": false, 00:03:41.471 "zcopy": false, 00:03:41.471 "c2h_success": true, 00:03:41.471 "sock_priority": 0, 00:03:41.471 "abort_timeout_sec": 1, 00:03:41.471 "ack_timeout": 0, 00:03:41.471 "data_wr_pool_size": 0 00:03:41.471 } 00:03:41.471 } 00:03:41.471 ] 00:03:41.471 }, 00:03:41.471 { 00:03:41.471 "subsystem": "iscsi", 00:03:41.471 "config": [ 00:03:41.471 { 00:03:41.471 "method": "iscsi_set_options", 00:03:41.471 "params": { 00:03:41.471 "node_base": "iqn.2016-06.io.spdk", 00:03:41.471 "max_sessions": 128, 00:03:41.471 "max_connections_per_session": 2, 00:03:41.471 "max_queue_depth": 64, 00:03:41.471 "default_time2wait": 2, 00:03:41.471 "default_time2retain": 20, 00:03:41.471 "first_burst_length": 8192, 00:03:41.471 "immediate_data": true, 00:03:41.471 "allow_duplicated_isid": false, 00:03:41.471 "error_recovery_level": 0, 00:03:41.471 "nop_timeout": 60, 00:03:41.471 "nop_in_interval": 30, 00:03:41.471 "disable_chap": false, 00:03:41.471 "require_chap": false, 00:03:41.471 "mutual_chap": false, 00:03:41.471 "chap_group": 0, 00:03:41.471 "max_large_datain_per_connection": 64, 00:03:41.471 "max_r2t_per_connection": 4, 00:03:41.471 "pdu_pool_size": 36864, 00:03:41.471 "immediate_data_pool_size": 16384, 00:03:41.471 "data_out_pool_size": 2048 00:03:41.471 } 00:03:41.471 } 00:03:41.471 ] 00:03:41.471 } 00:03:41.471 ] 00:03:41.471 } 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 46494 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 46494 ']' 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 46494 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # ps -c -o command 46494 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # tail -1 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:03:41.471 killing process with pid 46494 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 46494' 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 46494 00:03:41.471 10:07:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 46494 00:03:41.729 10:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=46508 00:03:41.729 10:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:41.729 10:07:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 46508 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 46508 ']' 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 46508 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # ps -c -o command 46508 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # tail -1 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:03:46.993 killing process with pid 46508 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 46508' 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 46508 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 46508 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /usr/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:46.993 00:03:46.993 real 0m6.831s 00:03:46.993 user 0m6.304s 00:03:46.993 sys 0m1.197s 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:46.993 ************************************ 00:03:46.993 END TEST skip_rpc_with_json 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:46.993 ************************************ 00:03:46.993 10:07:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:46.993 10:07:52 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:46.993 10:07:52 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:46.993 10:07:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.993 ************************************ 00:03:46.993 START TEST skip_rpc_with_delay 00:03:46.993 ************************************ 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # 
test_skip_rpc_with_delay 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:03:46.993 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:47.250 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:03:47.250 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:47.250 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:03:47.250 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.250 [2024-06-10 10:07:52.598203] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
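The ERROR above is the expected outcome rather than a failure: --wait-for-rpc asks the app to pause until RPC configuration arrives, which is impossible when --no-rpc-server suppresses the listener, so spdk_tgt must refuse to start. A condensed sketch of the assertion the NOT wrapper makes here (an illustration, not the test script itself):
# spdk_tgt is expected to exit non-zero for this flag combination.
if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo 'unexpected: --wait-for-rpc accepted without an RPC server' >&2
    exit 1
fi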
00:03:47.250 [2024-06-10 10:07:52.599701] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:47.250 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:03:47.250 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:03:47.250 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:03:47.250 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:03:47.250 00:03:47.250 real 0m0.012s 00:03:47.250 user 0m0.003s 00:03:47.250 sys 0m0.009s 00:03:47.250 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:47.250 10:07:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:47.250 ************************************ 00:03:47.250 END TEST skip_rpc_with_delay 00:03:47.250 ************************************ 00:03:47.250 10:07:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:47.250 10:07:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:03:47.250 10:07:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /usr/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:47.250 00:03:47.250 real 0m12.583s 00:03:47.250 user 0m11.421s 00:03:47.250 sys 0m1.908s 00:03:47.250 10:07:52 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:47.250 10:07:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.250 ************************************ 00:03:47.250 END TEST skip_rpc 00:03:47.250 ************************************ 00:03:47.250 10:07:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:47.250 10:07:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:47.250 10:07:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:47.250 10:07:52 -- common/autotest_common.sh@10 -- # set +x 00:03:47.250 ************************************ 00:03:47.250 START TEST rpc_client 00:03:47.250 ************************************ 00:03:47.250 10:07:52 rpc_client -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:47.250 * Looking for test storage... 
00:03:47.508 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc_client 00:03:47.508 10:07:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:03:47.508 OK 00:03:47.508 10:07:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:47.508 00:03:47.508 real 0m0.201s 00:03:47.508 user 0m0.172s 00:03:47.508 sys 0m0.117s 00:03:47.508 10:07:52 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:47.508 10:07:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:47.508 ************************************ 00:03:47.508 END TEST rpc_client 00:03:47.508 ************************************ 00:03:47.508 10:07:52 -- spdk/autotest.sh@172 -- # run_test json_config /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:47.508 10:07:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:47.508 10:07:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:47.508 10:07:52 -- common/autotest_common.sh@10 -- # set +x 00:03:47.508 ************************************ 00:03:47.508 START TEST json_config 00:03:47.508 ************************************ 00:03:47.508 10:07:52 json_config -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:47.508 10:07:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:47.508 10:07:53 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:47.508 10:07:53 json_config -- nvmf/common.sh@7 -- # return 0 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:47.508 
10:07:53 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:47.508 INFO: JSON configuration test init 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:47.508 10:07:53 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:47.508 10:07:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:47.508 10:07:53 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:47.508 10:07:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.508 10:07:53 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:47.508 10:07:53 json_config -- json_config/common.sh@9 -- # local app=target 00:03:47.508 10:07:53 json_config -- json_config/common.sh@10 -- # shift 00:03:47.508 10:07:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:47.508 10:07:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:47.508 10:07:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:47.508 10:07:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.508 10:07:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:47.508 10:07:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46667 00:03:47.508 Waiting for target to run... 00:03:47.508 10:07:53 json_config -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:47.508 10:07:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:47.508 10:07:53 json_config -- json_config/common.sh@25 -- # waitforlisten 46667 /var/tmp/spdk_tgt.sock 00:03:47.508 10:07:53 json_config -- common/autotest_common.sh@830 -- # '[' -z 46667 ']' 00:03:47.508 10:07:53 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:47.508 10:07:53 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:03:47.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:47.766 10:07:53 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:47.766 10:07:53 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:03:47.766 10:07:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.766 [2024-06-10 10:07:53.112478] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:03:47.766 [2024-06-10 10:07:53.112685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:48.023 EAL: TSC is not safe to use in SMP mode 00:03:48.023 EAL: TSC is not invariant 00:03:48.023 [2024-06-10 10:07:53.374197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.023 [2024-06-10 10:07:53.467980] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
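Unlike the rpc tests above, the json_config suite runs its target on a dedicated socket (-r /var/tmp/spdk_tgt.sock) and with --wait-for-rpc, so the framework holds off initializing until configuration is pushed over that socket. Every RPC below goes through the tgt_rpc helper; roughly (the function body is a sketch of what json_config/common.sh does, inferred from the traced rpc.py calls):
# tgt_rpc: send an RPC to the json_config target on its private socket.
tgt_rpc() {
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
}
# Example use, as seen below: tgt_rpc load_config, tgt_rpc save_config, ...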
00:03:48.023 [2024-06-10 10:07:53.470406] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.956 10:07:54 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:03:48.956 10:07:54 json_config -- common/autotest_common.sh@863 -- # return 0 00:03:48.956 00:03:48.956 10:07:54 json_config -- json_config/common.sh@26 -- # echo '' 00:03:48.956 10:07:54 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:48.956 10:07:54 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:48.956 10:07:54 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:48.956 10:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.956 10:07:54 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:48.956 10:07:54 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:48.956 10:07:54 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:48.956 10:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.956 10:07:54 json_config -- json_config/json_config.sh@273 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:48.956 10:07:54 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:48.956 10:07:54 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:49.212 [2024-06-10 10:07:54.796835] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:49.469 10:07:54 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:49.469 10:07:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:49.469 10:07:54 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:49.469 10:07:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.469 10:07:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:49.469 10:07:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:49.469 10:07:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:49.469 10:07:54 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:49.469 10:07:54 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:49.469 10:07:54 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:49.726 10:07:55 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:49.726 10:07:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 
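The create_bdev_subsystem_config step that follows builds its whole bdev layout through individual RPCs. Condensed from the entries below, with arguments exactly as logged (this is a summary of the traced calls, not the script itself):
tgt_rpc bdev_split_create Nvme0n1 2                   # -> Nvme0n1p0, Nvme0n1p1
tgt_rpc bdev_split_create Malloc0 3                   # -> Malloc0p0, Malloc0p1, Malloc0p2 (deferred until Malloc0 exists)
tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
tgt_rpc bdev_null_create Null0 32 512
tgt_rpc bdev_malloc_create 32 512 --name Malloc0
tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
dd if=/dev/zero of=/sample_aio bs=1024 count=102400   # backing file for the AIO bdev
tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0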
00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:03:49.726 10:07:55 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:49.726 10:07:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:03:49.726 10:07:55 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:03:49.726 10:07:55 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:03:49.984 10:07:55 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:03:49.984 10:07:55 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:49.984 10:07:55 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:49.984 10:07:55 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:03:49.984 10:07:55 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:03:49.984 10:07:55 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:03:49.984 10:07:55 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:03:50.242 Nvme0n1p0 Nvme0n1p1 00:03:50.242 10:07:55 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:03:50.242 10:07:55 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:03:50.501 [2024-06-10 10:07:55.889301] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:50.501 [2024-06-10 10:07:55.889346] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:50.501 00:03:50.501 10:07:55 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:03:50.501 10:07:55 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:03:50.501 Malloc3 00:03:50.501 10:07:56 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:03:50.501 10:07:56 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:03:50.759 [2024-06-10 10:07:56.257315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:50.759 [2024-06-10 10:07:56.257363] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.759 [2024-06-10 10:07:56.257387] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x828b6f180 00:03:50.759 [2024-06-10 10:07:56.257393] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.759 [2024-06-10 10:07:56.257823] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.759 [2024-06-10 10:07:56.257843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:03:50.759 PTBdevFromMalloc3 00:03:50.759 10:07:56 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:03:50.759 10:07:56 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:03:51.019 Null0 00:03:51.019 10:07:56 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:03:51.019 10:07:56 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:03:51.343 Malloc0 00:03:51.343 10:07:56 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:03:51.343 10:07:56 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:03:51.343 Malloc1 00:03:51.343 10:07:56 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:03:51.343 10:07:56 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:03:52.282 102400+0 records in 00:03:52.282 102400+0 records out 00:03:52.282 104857600 bytes transferred in 0.722226 secs (145186648 bytes/sec) 00:03:52.282 10:07:57 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:03:52.282 10:07:57 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:03:52.282 aio_disk 00:03:52.282 10:07:57 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:03:52.282 10:07:57 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:03:52.282 10:07:57 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:03:52.541 4fd754a7-2711-11ef-b084-113036b5c18d 00:03:52.541 10:07:57 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:03:52.541 10:07:57 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:03:52.541 10:07:57 json_config -- 
json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:03:52.541 10:07:58 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:03:52.541 10:07:58 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:03:52.800 10:07:58 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:03:52.800 10:07:58 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:03:53.059 10:07:58 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:03:53.059 10:07:58 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:4ff230a2-2711-11ef-b084-113036b5c18d bdev_register:5016336f-2711-11ef-b084-113036b5c18d bdev_register:50324711-2711-11ef-b084-113036b5c18d bdev_register:504ef756-2711-11ef-b084-113036b5c18d 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@71 -- # sort 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:4ff230a2-2711-11ef-b084-113036b5c18d bdev_register:5016336f-2711-11ef-b084-113036b5c18d bdev_register:50324711-2711-11ef-b084-113036b5c18d bdev_register:504ef756-2711-11ef-b084-113036b5c18d 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@72 -- # sort 00:03:53.318 10:07:58 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:03:53.319 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.319 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.319 10:07:58 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:03:53.319 10:07:58 json_config -- json_config/json_config.sh@58 -- # tgt_rpc 
notify_get_notifications -i 0 00:03:53.319 10:07:58 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 
-- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:4ff230a2-2711-11ef-b084-113036b5c18d 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:5016336f-2711-11ef-b084-113036b5c18d 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:50324711-2711-11ef-b084-113036b5c18d 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:504ef756-2711-11ef-b084-113036b5c18d 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:03:53.578 10:07:58 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:03:53.579 10:07:58 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:4ff230a2-2711-11ef-b084-113036b5c18d bdev_register:5016336f-2711-11ef-b084-113036b5c18d bdev_register:50324711-2711-11ef-b084-113036b5c18d bdev_register:504ef756-2711-11ef-b084-113036b5c18d bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\f\f\2\3\0\a\2\-\2\7\1\1\-\1\1\e\f\-\b\0\8\4\-\1\1\3\0\3\6\b\5\c\1\8\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\0\1\6\3\3\6\f\-\2\7\1\1\-\1\1\e\f\-\b\0\8\4\-\1\1\3\0\3\6\b\5\c\1\8\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\0\3\2\4\7\1\1\-\2\7\1\1\-\1\1\e\f\-\b\0\8\4\-\1\1\3\0\3\6\b\5\c\1\8\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\0\4\e\f\7\5\6\-\2\7\1\1\-\1\1\e\f\-\b\0\8\4\-\1\1\3\0\3\6\b\5\c\1\8\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:03:53.579 10:07:58 json_config -- json_config/json_config.sh@86 -- # cat 00:03:53.579 10:07:58 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:4ff230a2-2711-11ef-b084-113036b5c18d bdev_register:5016336f-2711-11ef-b084-113036b5c18d bdev_register:50324711-2711-11ef-b084-113036b5c18d bdev_register:504ef756-2711-11ef-b084-113036b5c18d bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:03:53.579 Expected events matched: 00:03:53.579 
bdev_register:4ff230a2-2711-11ef-b084-113036b5c18d 00:03:53.579 bdev_register:5016336f-2711-11ef-b084-113036b5c18d 00:03:53.579 bdev_register:50324711-2711-11ef-b084-113036b5c18d 00:03:53.579 bdev_register:504ef756-2711-11ef-b084-113036b5c18d 00:03:53.579 bdev_register:Malloc0 00:03:53.579 bdev_register:Malloc0p0 00:03:53.579 bdev_register:Malloc0p1 00:03:53.579 bdev_register:Malloc0p2 00:03:53.579 bdev_register:Malloc1 00:03:53.579 bdev_register:Malloc3 00:03:53.579 bdev_register:Null0 00:03:53.579 bdev_register:Nvme0n1 00:03:53.579 bdev_register:Nvme0n1p0 00:03:53.579 bdev_register:Nvme0n1p1 00:03:53.579 bdev_register:PTBdevFromMalloc3 00:03:53.579 bdev_register:aio_disk 00:03:53.579 10:07:58 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:03:53.579 10:07:58 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:53.579 10:07:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.579 10:07:58 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:53.579 10:07:58 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:53.579 10:07:58 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:03:53.579 10:07:58 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:53.579 10:07:58 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:53.579 10:07:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.579 10:07:58 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:53.579 10:07:58 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.579 10:07:58 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.837 MallocBdevForConfigChangeCheck 00:03:53.837 10:07:59 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:53.837 10:07:59 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:53.837 10:07:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.837 10:07:59 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:53.837 10:07:59 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:54.096 INFO: shutting down applications... 00:03:54.096 10:07:59 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
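Everything registered above has to show up in the notification stream; the "Expected events matched" result is computed by flattening both lists to type:ctx:id strings and sorting them before comparison. A sketch of the helper visible in the trace (the function body condenses what json_config.sh does at the lines quoted above):
# One line per recorded event; both sides are sorted, so registration order is irrelevant.
get_notifications() {
    tgt_rpc notify_get_notifications -i 0 |
        jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
}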
00:03:54.096 10:07:59 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:54.096 10:07:59 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:54.096 10:07:59 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:54.096 10:07:59 json_config -- json_config/json_config.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:54.096 [2024-06-10 10:07:59.677481] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:03:54.354 Calling clear_iscsi_subsystem 00:03:54.354 Calling clear_nvmf_subsystem 00:03:54.354 Calling clear_bdev_subsystem 00:03:54.354 10:07:59 json_config -- json_config/json_config.sh@337 -- # local config_filter=/usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:03:54.354 10:07:59 json_config -- json_config/json_config.sh@343 -- # count=100 00:03:54.354 10:07:59 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:54.354 10:07:59 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:54.354 10:07:59 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:54.354 10:07:59 json_config -- json_config/json_config.sh@345 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:03:54.919 10:08:00 json_config -- json_config/json_config.sh@345 -- # break 00:03:54.919 10:08:00 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:54.919 10:08:00 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:54.919 10:08:00 json_config -- json_config/common.sh@31 -- # local app=target 00:03:54.919 10:08:00 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:54.919 10:08:00 json_config -- json_config/common.sh@35 -- # [[ -n 46667 ]] 00:03:54.919 10:08:00 json_config -- json_config/common.sh@38 -- # kill -SIGINT 46667 00:03:54.919 10:08:00 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:54.919 10:08:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:54.919 10:08:00 json_config -- json_config/common.sh@41 -- # kill -0 46667 00:03:54.919 10:08:00 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:55.483 10:08:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:55.483 10:08:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:55.483 10:08:00 json_config -- json_config/common.sh@41 -- # kill -0 46667 00:03:55.483 10:08:00 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:55.483 10:08:00 json_config -- json_config/common.sh@43 -- # break 00:03:55.483 10:08:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:55.483 SPDK target shutdown done 00:03:55.483 10:08:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:55.483 INFO: relaunching applications... 00:03:55.483 10:08:00 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
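Teardown mirrors setup: the configuration is cleared over RPC, the target receives SIGINT, and the script polls the pid until it exits. Roughly what json_config/common.sh does here, with values as logged (app_pid stands in for the pid array entry, 46667 in this run):
test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
kill -SIGINT "$app_pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$app_pid" 2> /dev/null || break   # target still running?
    sleep 0.5
done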
00:03:55.483 10:08:00 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:55.483 10:08:00 json_config -- json_config/common.sh@9 -- # local app=target 00:03:55.483 10:08:00 json_config -- json_config/common.sh@10 -- # shift 00:03:55.483 10:08:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:55.483 10:08:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:55.483 10:08:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:55.483 10:08:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:55.483 10:08:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:55.483 10:08:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46853 00:03:55.483 10:08:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:55.483 Waiting for target to run... 00:03:55.483 10:08:00 json_config -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:55.483 10:08:00 json_config -- json_config/common.sh@25 -- # waitforlisten 46853 /var/tmp/spdk_tgt.sock 00:03:55.483 10:08:00 json_config -- common/autotest_common.sh@830 -- # '[' -z 46853 ']' 00:03:55.483 10:08:00 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:55.483 10:08:00 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:03:55.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:55.483 10:08:00 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:55.483 10:08:00 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:03:55.484 10:08:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.484 [2024-06-10 10:08:00.820817] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:03:55.484 [2024-06-10 10:08:00.821016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:55.484 EAL: TSC is not safe to use in SMP mode 00:03:55.484 EAL: TSC is not invariant 00:03:55.484 [2024-06-10 10:08:01.048600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.742 [2024-06-10 10:08:01.124177] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
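Once the relaunched target is up, the suite checks that nothing was lost in the save/load round trip: the live configuration and spdk_tgt_config.json are both normalized with config_filter.py -method sort and then diffed, as the json_diff.sh trace below shows. Roughly (the exact plumbing of temporary files is an assumption; the real script generates them with mktemp as seen in the trace):
# Sketch of the comparison performed by test/json_config/json_diff.sh below.
tgt_rpc save_config | test/json_config/config_filter.py -method sort > "$tmp_file_1"
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$tmp_file_2"
diff -u "$tmp_file_1" "$tmp_file_2" && echo 'INFO: JSON config files are the same'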
00:03:55.742 [2024-06-10 10:08:01.126304] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.742 [2024-06-10 10:08:01.262829] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:03:55.742 [2024-06-10 10:08:01.262896] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:03:55.742 [2024-06-10 10:08:01.270818] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:55.742 [2024-06-10 10:08:01.270843] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:55.742 [2024-06-10 10:08:01.278829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:55.742 [2024-06-10 10:08:01.278850] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:03:55.742 [2024-06-10 10:08:01.278857] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:03:55.742 [2024-06-10 10:08:01.286831] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:56.001 [2024-06-10 10:08:01.356496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:56.001 [2024-06-10 10:08:01.356545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:56.001 [2024-06-10 10:08:01.356553] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82df83780 00:03:56.001 [2024-06-10 10:08:01.356560] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:56.001 [2024-06-10 10:08:01.356618] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:56.001 [2024-06-10 10:08:01.356628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:03:56.259 10:08:01 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:03:56.259 10:08:01 json_config -- common/autotest_common.sh@863 -- # return 0 00:03:56.259 00:03:56.259 10:08:01 json_config -- json_config/common.sh@26 -- # echo '' 00:03:56.259 10:08:01 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:56.259 INFO: Checking if target configuration is the same... 00:03:56.259 10:08:01 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:56.259 10:08:01 json_config -- json_config/json_config.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.cABbpy /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.259 + '[' 2 -ne 2 ']' 00:03:56.259 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:56.259 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:03:56.259 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:56.259 +++ basename /tmp//sh-np.cABbpy 00:03:56.259 ++ mktemp /tmp/sh-np.cABbpy.XXX 00:03:56.259 + tmp_file_1=/tmp/sh-np.cABbpy.WOi 00:03:56.259 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.259 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.259 + tmp_file_2=/tmp/spdk_tgt_config.json.KBn 00:03:56.259 + ret=0 00:03:56.259 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.259 10:08:01 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:56.259 10:08:01 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.518 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.518 + diff -u /tmp/sh-np.cABbpy.WOi /tmp/spdk_tgt_config.json.KBn 00:03:56.518 INFO: JSON config files are the same 00:03:56.518 + echo 'INFO: JSON config files are the same' 00:03:56.518 + rm /tmp/sh-np.cABbpy.WOi /tmp/spdk_tgt_config.json.KBn 00:03:56.518 + exit 0 00:03:56.776 10:08:02 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:56.776 INFO: changing configuration and checking if this can be detected... 00:03:56.776 10:08:02 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:56.776 10:08:02 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.776 10:08:02 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.776 10:08:02 json_config -- json_config/json_config.sh@387 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.PJ7dGG /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.776 + '[' 2 -ne 2 ']' 00:03:56.776 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:56.776 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
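The comparison above boils down to: dump the live configuration over RPC, normalize both JSON documents, and diff them. A condensed sketch of that save/sort/diff flow, assuming config_filter.py -method sort filters stdin to stdout as the bare invocations in the trace suggest; the temp file names here are illustrative, not the mktemp names from the log:

  rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  filter=/usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  "$rpc_py" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live_sorted.json
  "$filter" -method sort < /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
  diff -u /tmp/live_sorted.json /tmp/file_sorted.json \
      && echo 'INFO: JSON config files are the same'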
00:03:56.776 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:56.776 +++ basename /tmp//sh-np.PJ7dGG 00:03:56.776 ++ mktemp /tmp/sh-np.PJ7dGG.XXX 00:03:56.776 + tmp_file_1=/tmp/sh-np.PJ7dGG.0sY 00:03:56.776 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.776 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.776 + tmp_file_2=/tmp/spdk_tgt_config.json.inm 00:03:56.776 + ret=0 00:03:56.776 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.776 10:08:02 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:56.776 10:08:02 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.035 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:57.294 + diff -u /tmp/sh-np.PJ7dGG.0sY /tmp/spdk_tgt_config.json.inm 00:03:57.294 + ret=1 00:03:57.294 + echo '=== Start of file: /tmp/sh-np.PJ7dGG.0sY ===' 00:03:57.294 + cat /tmp/sh-np.PJ7dGG.0sY 00:03:57.294 + echo '=== End of file: /tmp/sh-np.PJ7dGG.0sY ===' 00:03:57.294 + echo '' 00:03:57.294 + echo '=== Start of file: /tmp/spdk_tgt_config.json.inm ===' 00:03:57.294 + cat /tmp/spdk_tgt_config.json.inm 00:03:57.294 + echo '=== End of file: /tmp/spdk_tgt_config.json.inm ===' 00:03:57.294 + echo '' 00:03:57.294 + rm /tmp/sh-np.PJ7dGG.0sY /tmp/spdk_tgt_config.json.inm 00:03:57.294 + exit 1 00:03:57.294 INFO: configuration change detected. 00:03:57.294 10:08:02 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:03:57.294 10:08:02 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:03:57.294 10:08:02 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:03:57.294 10:08:02 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:57.294 10:08:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.294 10:08:02 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:03:57.294 10:08:02 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:03:57.294 10:08:02 json_config -- json_config/json_config.sh@317 -- # [[ -n 46853 ]] 00:03:57.294 10:08:02 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:03:57.294 10:08:02 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:03:57.294 10:08:02 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:57.294 10:08:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.294 10:08:02 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:03:57.294 10:08:02 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:03:57.294 10:08:02 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:03:57.553 10:08:02 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:03:57.553 10:08:02 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:03:57.553 10:08:03 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:03:57.553 10:08:03 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_lvol_delete lvs_test/snapshot0 00:03:57.812 10:08:03 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:03:57.812 10:08:03 json_config -- json_config/common.sh@57 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:03:58.070 10:08:03 json_config -- json_config/json_config.sh@193 -- # uname -s 00:03:58.070 10:08:03 json_config -- json_config/json_config.sh@193 -- # [[ FreeBSD = Linux ]] 00:03:58.070 10:08:03 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:03:58.070 10:08:03 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.070 10:08:03 json_config -- json_config/json_config.sh@323 -- # killprocess 46853 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@949 -- # '[' -z 46853 ']' 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@953 -- # kill -0 46853 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@954 -- # uname 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@957 -- # ps -c -o command 46853 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@957 -- # tail -1 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:03:58.070 killing process with pid 46853 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 46853' 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@968 -- # kill 46853 00:03:58.070 10:08:03 json_config -- common/autotest_common.sh@973 -- # wait 46853 00:03:58.329 10:08:03 json_config -- json_config/json_config.sh@326 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:58.329 10:08:03 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:03:58.329 10:08:03 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:58.329 10:08:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.329 10:08:03 json_config -- json_config/json_config.sh@328 -- # return 0 00:03:58.329 INFO: Success 00:03:58.329 10:08:03 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:03:58.329 00:03:58.329 real 0m10.897s 00:03:58.329 user 0m16.503s 00:03:58.329 sys 0m1.754s 00:03:58.329 10:08:03 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:58.329 ************************************ 00:03:58.329 10:08:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.329 END TEST json_config 00:03:58.329 ************************************ 00:03:58.329 10:08:03 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:58.330 10:08:03 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:58.330 10:08:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:58.330 10:08:03 -- common/autotest_common.sh@10 -- # set +x 00:03:58.330 ************************************ 00:03:58.330 START TEST json_config_extra_key 00:03:58.330 
************************************ 00:03:58.330 10:08:03 json_config_extra_key -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:58.589 10:08:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:58.589 10:08:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:58.589 10:08:03 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:58.589 INFO: launching applications... 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:58.589 10:08:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46982 00:03:58.589 Waiting for target to run... 00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
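The extra_key test tracks each app it launches in associative arrays keyed by app name, as the declare -A lines above show. A sketch of how those entries drive the launch that follows; the values are copied from the trace, the surrounding bookkeeping is simplified:

  declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
  declare -A app_params=(['target']='-m 0x1 -s 1024')
  declare -A configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
  declare -A app_pid=(['target']='')
  # app_params is left unquoted on purpose so "-m 0x1 -s 1024" splits into separate arguments
  /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params['target']} \
      -r "${app_socket['target']}" --json "${configs_path['target']}" &
  app_pid['target']=$!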
00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46982 /var/tmp/spdk_tgt.sock 00:03:58.589 10:08:03 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 46982 ']' 00:03:58.589 10:08:03 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.589 10:08:03 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:03:58.589 10:08:03 json_config_extra_key -- json_config/common.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:58.589 10:08:03 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.589 10:08:03 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:03:58.589 10:08:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:58.589 [2024-06-10 10:08:04.006037] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:03:58.589 [2024-06-10 10:08:04.006332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:58.849 EAL: TSC is not safe to use in SMP mode 00:03:58.849 EAL: TSC is not invariant 00:03:58.849 [2024-06-10 10:08:04.227634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.849 [2024-06-10 10:08:04.304017] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:03:58.849 [2024-06-10 10:08:04.306054] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.785 10:08:05 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:03:59.785 10:08:05 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:03:59.785 00:03:59.785 10:08:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:59.785 INFO: shutting down applications... 00:03:59.785 10:08:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:03:59.785 10:08:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:59.785 10:08:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:59.785 10:08:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:59.785 10:08:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46982 ]] 00:03:59.785 10:08:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46982 00:03:59.785 10:08:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:59.785 10:08:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:59.785 10:08:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46982 00:03:59.785 10:08:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:00.044 10:08:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:00.044 10:08:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:00.044 10:08:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46982 00:04:00.044 10:08:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:00.044 10:08:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:00.044 10:08:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:00.044 10:08:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:00.044 SPDK target shutdown done 00:04:00.044 Success 00:04:00.044 10:08:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:00.044 00:04:00.044 real 0m1.739s 00:04:00.044 user 0m1.613s 00:04:00.044 sys 0m0.398s 00:04:00.044 10:08:05 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:00.044 10:08:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:00.044 ************************************ 00:04:00.044 END TEST json_config_extra_key 00:04:00.044 ************************************ 00:04:00.044 10:08:05 -- spdk/autotest.sh@174 -- # run_test alias_rpc /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.044 10:08:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:00.044 10:08:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:00.044 10:08:05 -- common/autotest_common.sh@10 -- # set +x 00:04:00.044 ************************************ 00:04:00.044 START TEST alias_rpc 00:04:00.044 ************************************ 00:04:00.044 10:08:05 alias_rpc -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.303 * Looking for test storage... 
00:04:00.303 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:00.303 10:08:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:00.303 10:08:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=47040 00:04:00.303 10:08:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 47040 00:04:00.303 10:08:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:00.303 10:08:05 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 47040 ']' 00:04:00.303 10:08:05 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.303 10:08:05 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:00.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.303 10:08:05 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.303 10:08:05 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:00.303 10:08:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.303 [2024-06-10 10:08:05.807524] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:00.303 [2024-06-10 10:08:05.807841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:00.926 EAL: TSC is not safe to use in SMP mode 00:04:00.926 EAL: TSC is not invariant 00:04:00.926 [2024-06-10 10:08:06.287139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.926 [2024-06-10 10:08:06.361818] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
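alias_rpc installs an ERR trap so any failure tears the target down via killprocess. A hedged sketch of that FreeBSD-flavoured teardown; the ps -c name check mirrors what the traces show, but the real autotest_common.sh helper handles more cases (for example sudo-wrapped processes):

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if it already exited
      local name
      name=$(ps -c -o command "$pid" | tail -1)   # FreeBSD: -c reports the bare command name
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }
  trap 'killprocess "$spdk_tgt_pid"; exit 1' ERR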
00:04:00.926 [2024-06-10 10:08:06.363840] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.184 10:08:06 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:01.184 10:08:06 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:04:01.184 10:08:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:01.443 10:08:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 47040 00:04:01.443 10:08:07 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 47040 ']' 00:04:01.443 10:08:07 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 47040 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@957 -- # ps -c -o command 47040 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@957 -- # tail -1 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:04:01.702 killing process with pid 47040 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 47040' 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@968 -- # kill 47040 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@973 -- # wait 47040 00:04:01.702 00:04:01.702 real 0m1.672s 00:04:01.702 user 0m1.728s 00:04:01.702 sys 0m0.724s 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:01.702 10:08:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.702 ************************************ 00:04:01.702 END TEST alias_rpc 00:04:01.702 ************************************ 00:04:01.960 10:08:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:01.960 10:08:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:01.960 10:08:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:01.960 10:08:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:01.960 10:08:07 -- common/autotest_common.sh@10 -- # set +x 00:04:01.960 ************************************ 00:04:01.960 START TEST spdkcli_tcp 00:04:01.960 ************************************ 00:04:01.960 10:08:07 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:01.960 * Looking for test storage... 
00:04:01.960 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:01.960 10:08:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:01.960 10:08:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/usr/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:01.960 10:08:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:01.960 10:08:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:01.960 10:08:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:01.960 10:08:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:01.960 10:08:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:01.960 10:08:07 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:01.960 10:08:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.960 10:08:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=47105 00:04:01.960 10:08:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:01.960 10:08:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 47105 00:04:01.960 10:08:07 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 47105 ']' 00:04:01.960 10:08:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.960 10:08:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:01.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.960 10:08:07 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.960 10:08:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:01.960 10:08:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.960 [2024-06-10 10:08:07.525493] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:01.960 [2024-06-10 10:08:07.525657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:02.527 EAL: TSC is not safe to use in SMP mode 00:04:02.527 EAL: TSC is not invariant 00:04:02.527 [2024-06-10 10:08:08.016431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:02.527 [2024-06-10 10:08:08.100764] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:02.527 [2024-06-10 10:08:08.100825] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
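The tcp.sh run that follows exercises the RPC layer over TCP: socat bridges 127.0.0.1:9998 to the target's UNIX-domain socket, and rpc.py is pointed at the TCP address with retries and a timeout. The flags below are copied from the trace; only the surrounding cleanup is simplified:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"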
00:04:02.527 [2024-06-10 10:08:08.103615] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.527 [2024-06-10 10:08:08.103608] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.094 10:08:08 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:03.094 10:08:08 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:04:03.094 10:08:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=47113 00:04:03.094 10:08:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:03.094 10:08:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:03.353 [ 00:04:03.353 "spdk_get_version", 00:04:03.353 "rpc_get_methods", 00:04:03.353 "env_dpdk_get_mem_stats", 00:04:03.353 "trace_get_info", 00:04:03.353 "trace_get_tpoint_group_mask", 00:04:03.353 "trace_disable_tpoint_group", 00:04:03.353 "trace_enable_tpoint_group", 00:04:03.353 "trace_clear_tpoint_mask", 00:04:03.353 "trace_set_tpoint_mask", 00:04:03.353 "notify_get_notifications", 00:04:03.353 "notify_get_types", 00:04:03.353 "accel_get_stats", 00:04:03.353 "accel_set_options", 00:04:03.353 "accel_set_driver", 00:04:03.353 "accel_crypto_key_destroy", 00:04:03.353 "accel_crypto_keys_get", 00:04:03.353 "accel_crypto_key_create", 00:04:03.353 "accel_assign_opc", 00:04:03.353 "accel_get_module_info", 00:04:03.353 "accel_get_opc_assignments", 00:04:03.353 "bdev_get_histogram", 00:04:03.353 "bdev_enable_histogram", 00:04:03.353 "bdev_set_qos_limit", 00:04:03.353 "bdev_set_qd_sampling_period", 00:04:03.353 "bdev_get_bdevs", 00:04:03.353 "bdev_reset_iostat", 00:04:03.353 "bdev_get_iostat", 00:04:03.353 "bdev_examine", 00:04:03.353 "bdev_wait_for_examine", 00:04:03.353 "bdev_set_options", 00:04:03.353 "keyring_get_keys", 00:04:03.353 "framework_get_pci_devices", 00:04:03.353 "framework_get_config", 00:04:03.353 "framework_get_subsystems", 00:04:03.353 "sock_get_default_impl", 00:04:03.353 "sock_set_default_impl", 00:04:03.353 "sock_impl_set_options", 00:04:03.353 "sock_impl_get_options", 00:04:03.353 "thread_set_cpumask", 00:04:03.353 "framework_get_scheduler", 00:04:03.353 "framework_set_scheduler", 00:04:03.353 "framework_get_reactors", 00:04:03.353 "thread_get_io_channels", 00:04:03.353 "thread_get_pollers", 00:04:03.353 "thread_get_stats", 00:04:03.354 "framework_monitor_context_switch", 00:04:03.354 "spdk_kill_instance", 00:04:03.354 "log_enable_timestamps", 00:04:03.354 "log_get_flags", 00:04:03.354 "log_clear_flag", 00:04:03.354 "log_set_flag", 00:04:03.354 "log_get_level", 00:04:03.354 "log_set_level", 00:04:03.354 "log_get_print_level", 00:04:03.354 "log_set_print_level", 00:04:03.354 "framework_enable_cpumask_locks", 00:04:03.354 "framework_disable_cpumask_locks", 00:04:03.354 "framework_wait_init", 00:04:03.354 "framework_start_init", 00:04:03.354 "iobuf_get_stats", 00:04:03.354 "iobuf_set_options", 00:04:03.354 "vmd_rescan", 00:04:03.354 "vmd_remove_device", 00:04:03.354 "vmd_enable", 00:04:03.354 "nvmf_stop_mdns_prr", 00:04:03.354 "nvmf_publish_mdns_prr", 00:04:03.354 "nvmf_subsystem_get_listeners", 00:04:03.354 "nvmf_subsystem_get_qpairs", 00:04:03.354 "nvmf_subsystem_get_controllers", 00:04:03.354 "nvmf_get_stats", 00:04:03.354 "nvmf_get_transports", 00:04:03.354 "nvmf_create_transport", 00:04:03.354 "nvmf_get_targets", 00:04:03.354 "nvmf_delete_target", 00:04:03.354 "nvmf_create_target", 00:04:03.354 "nvmf_subsystem_allow_any_host", 00:04:03.354 
"nvmf_subsystem_remove_host", 00:04:03.354 "nvmf_subsystem_add_host", 00:04:03.354 "nvmf_ns_remove_host", 00:04:03.354 "nvmf_ns_add_host", 00:04:03.354 "nvmf_subsystem_remove_ns", 00:04:03.354 "nvmf_subsystem_add_ns", 00:04:03.354 "nvmf_subsystem_listener_set_ana_state", 00:04:03.354 "nvmf_discovery_get_referrals", 00:04:03.354 "nvmf_discovery_remove_referral", 00:04:03.354 "nvmf_discovery_add_referral", 00:04:03.354 "nvmf_subsystem_remove_listener", 00:04:03.354 "nvmf_subsystem_add_listener", 00:04:03.354 "nvmf_delete_subsystem", 00:04:03.354 "nvmf_create_subsystem", 00:04:03.354 "nvmf_get_subsystems", 00:04:03.354 "nvmf_set_crdt", 00:04:03.354 "nvmf_set_config", 00:04:03.354 "nvmf_set_max_subsystems", 00:04:03.354 "scsi_get_devices", 00:04:03.354 "iscsi_get_histogram", 00:04:03.354 "iscsi_enable_histogram", 00:04:03.354 "iscsi_set_options", 00:04:03.354 "iscsi_get_auth_groups", 00:04:03.354 "iscsi_auth_group_remove_secret", 00:04:03.354 "iscsi_auth_group_add_secret", 00:04:03.354 "iscsi_delete_auth_group", 00:04:03.354 "iscsi_create_auth_group", 00:04:03.354 "iscsi_set_discovery_auth", 00:04:03.354 "iscsi_get_options", 00:04:03.354 "iscsi_target_node_request_logout", 00:04:03.354 "iscsi_target_node_set_redirect", 00:04:03.354 "iscsi_target_node_set_auth", 00:04:03.354 "iscsi_target_node_add_lun", 00:04:03.354 "iscsi_get_stats", 00:04:03.354 "iscsi_get_connections", 00:04:03.354 "iscsi_portal_group_set_auth", 00:04:03.354 "iscsi_start_portal_group", 00:04:03.354 "iscsi_delete_portal_group", 00:04:03.354 "iscsi_create_portal_group", 00:04:03.354 "iscsi_get_portal_groups", 00:04:03.354 "iscsi_delete_target_node", 00:04:03.354 "iscsi_target_node_remove_pg_ig_maps", 00:04:03.354 "iscsi_target_node_add_pg_ig_maps", 00:04:03.354 "iscsi_create_target_node", 00:04:03.354 "iscsi_get_target_nodes", 00:04:03.354 "iscsi_delete_initiator_group", 00:04:03.354 "iscsi_initiator_group_remove_initiators", 00:04:03.354 "iscsi_initiator_group_add_initiators", 00:04:03.354 "iscsi_create_initiator_group", 00:04:03.354 "iscsi_get_initiator_groups", 00:04:03.354 "keyring_file_remove_key", 00:04:03.354 "keyring_file_add_key", 00:04:03.354 "iaa_scan_accel_module", 00:04:03.354 "dsa_scan_accel_module", 00:04:03.354 "ioat_scan_accel_module", 00:04:03.354 "accel_error_inject_error", 00:04:03.354 "bdev_aio_delete", 00:04:03.354 "bdev_aio_rescan", 00:04:03.354 "bdev_aio_create", 00:04:03.354 "blobfs_create", 00:04:03.354 "blobfs_detect", 00:04:03.354 "blobfs_set_cache_size", 00:04:03.354 "bdev_zone_block_delete", 00:04:03.354 "bdev_zone_block_create", 00:04:03.354 "bdev_delay_delete", 00:04:03.354 "bdev_delay_create", 00:04:03.354 "bdev_delay_update_latency", 00:04:03.354 "bdev_split_delete", 00:04:03.354 "bdev_split_create", 00:04:03.354 "bdev_error_inject_error", 00:04:03.354 "bdev_error_delete", 00:04:03.354 "bdev_error_create", 00:04:03.354 "bdev_raid_set_options", 00:04:03.354 "bdev_raid_remove_base_bdev", 00:04:03.354 "bdev_raid_add_base_bdev", 00:04:03.354 "bdev_raid_delete", 00:04:03.354 "bdev_raid_create", 00:04:03.354 "bdev_raid_get_bdevs", 00:04:03.354 "bdev_lvol_set_parent_bdev", 00:04:03.354 "bdev_lvol_set_parent", 00:04:03.354 "bdev_lvol_check_shallow_copy", 00:04:03.354 "bdev_lvol_start_shallow_copy", 00:04:03.354 "bdev_lvol_grow_lvstore", 00:04:03.354 "bdev_lvol_get_lvols", 00:04:03.354 "bdev_lvol_get_lvstores", 00:04:03.354 "bdev_lvol_delete", 00:04:03.354 "bdev_lvol_set_read_only", 00:04:03.354 "bdev_lvol_resize", 00:04:03.354 "bdev_lvol_decouple_parent", 00:04:03.354 "bdev_lvol_inflate", 00:04:03.354 
"bdev_lvol_rename", 00:04:03.354 "bdev_lvol_clone_bdev", 00:04:03.354 "bdev_lvol_clone", 00:04:03.354 "bdev_lvol_snapshot", 00:04:03.354 "bdev_lvol_create", 00:04:03.354 "bdev_lvol_delete_lvstore", 00:04:03.354 "bdev_lvol_rename_lvstore", 00:04:03.354 "bdev_lvol_create_lvstore", 00:04:03.354 "bdev_passthru_delete", 00:04:03.354 "bdev_passthru_create", 00:04:03.354 "bdev_nvme_send_cmd", 00:04:03.354 "bdev_nvme_get_path_iostat", 00:04:03.354 "bdev_nvme_get_mdns_discovery_info", 00:04:03.354 "bdev_nvme_stop_mdns_discovery", 00:04:03.354 "bdev_nvme_start_mdns_discovery", 00:04:03.354 "bdev_nvme_set_multipath_policy", 00:04:03.354 "bdev_nvme_set_preferred_path", 00:04:03.354 "bdev_nvme_get_io_paths", 00:04:03.354 "bdev_nvme_remove_error_injection", 00:04:03.354 "bdev_nvme_add_error_injection", 00:04:03.354 "bdev_nvme_get_discovery_info", 00:04:03.354 "bdev_nvme_stop_discovery", 00:04:03.354 "bdev_nvme_start_discovery", 00:04:03.354 "bdev_nvme_get_controller_health_info", 00:04:03.354 "bdev_nvme_disable_controller", 00:04:03.354 "bdev_nvme_enable_controller", 00:04:03.354 "bdev_nvme_reset_controller", 00:04:03.354 "bdev_nvme_get_transport_statistics", 00:04:03.354 "bdev_nvme_apply_firmware", 00:04:03.354 "bdev_nvme_detach_controller", 00:04:03.354 "bdev_nvme_get_controllers", 00:04:03.354 "bdev_nvme_attach_controller", 00:04:03.354 "bdev_nvme_set_hotplug", 00:04:03.354 "bdev_nvme_set_options", 00:04:03.354 "bdev_null_resize", 00:04:03.354 "bdev_null_delete", 00:04:03.354 "bdev_null_create", 00:04:03.354 "bdev_malloc_delete", 00:04:03.354 "bdev_malloc_create" 00:04:03.354 ] 00:04:03.354 10:08:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.354 10:08:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:03.354 10:08:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 47105 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 47105 ']' 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 47105 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@957 -- # ps -c -o command 47105 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@957 -- # tail -1 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:04:03.354 killing process with pid 47105 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 47105' 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 47105 00:04:03.354 10:08:08 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 47105 00:04:03.613 00:04:03.613 real 0m1.796s 00:04:03.613 user 0m2.863s 00:04:03.613 sys 0m0.778s 00:04:03.613 10:08:09 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:03.613 10:08:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.613 ************************************ 00:04:03.613 END TEST spdkcli_tcp 00:04:03.613 ************************************ 00:04:03.613 10:08:09 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility 
/usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:03.613 10:08:09 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:03.613 10:08:09 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:03.613 10:08:09 -- common/autotest_common.sh@10 -- # set +x 00:04:03.613 ************************************ 00:04:03.613 START TEST dpdk_mem_utility 00:04:03.613 ************************************ 00:04:03.613 10:08:09 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:03.871 * Looking for test storage... 00:04:03.871 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:03.871 10:08:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:03.871 10:08:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=47180 00:04:03.871 10:08:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:03.871 10:08:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 47180 00:04:03.871 10:08:09 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 47180 ']' 00:04:03.871 10:08:09 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.871 10:08:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:03.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.871 10:08:09 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.871 10:08:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:03.871 10:08:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.871 [2024-06-10 10:08:09.329677] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:03.871 [2024-06-10 10:08:09.329899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:04.437 EAL: TSC is not safe to use in SMP mode 00:04:04.437 EAL: TSC is not invariant 00:04:04.437 [2024-06-10 10:08:09.849032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.437 [2024-06-10 10:08:09.948134] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
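The memory-utility test that follows asks the running target to dump DPDK allocator state and then summarizes the dump offline. A minimal sketch of that two-step flow; the dump filename is the default the RPC reports in the trace:

  rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  mem_script=/usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  "$rpc_py" env_dpdk_get_mem_stats           # target writes /tmp/spdk_mem_dump.txt
  "$mem_script"                              # heap / mempool / memzone totals
  "$mem_script" -m 0                         # per-element detail for heap id 0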
00:04:04.437 [2024-06-10 10:08:09.950982] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.005 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:05.005 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:04:05.005 10:08:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:05.005 10:08:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:05.005 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:05.005 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.005 { 00:04:05.005 "filename": "/tmp/spdk_mem_dump.txt" 00:04:05.005 } 00:04:05.005 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:05.005 10:08:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:05.005 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:04:05.005 1 heaps totaling size 2048.000000 MiB 00:04:05.005 size: 2048.000000 MiB heap id: 0 00:04:05.005 end heaps---------- 00:04:05.005 8 mempools totaling size 592.563660 MiB 00:04:05.005 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:04:05.005 size: 153.489014 MiB name: PDU_data_out_Pool 00:04:05.005 size: 84.500549 MiB name: bdev_io_47180 00:04:05.005 size: 51.008362 MiB name: evtpool_47180 00:04:05.005 size: 50.000549 MiB name: msgpool_47180 00:04:05.005 size: 21.758911 MiB name: PDU_Pool 00:04:05.005 size: 19.508911 MiB name: SCSI_TASK_Pool 00:04:05.005 size: 0.026123 MiB name: Session_Pool 00:04:05.005 end mempools------- 00:04:05.005 6 memzones totaling size 4.142822 MiB 00:04:05.005 size: 1.000366 MiB name: RG_ring_0_47180 00:04:05.005 size: 1.000366 MiB name: RG_ring_1_47180 00:04:05.005 size: 1.000366 MiB name: RG_ring_4_47180 00:04:05.005 size: 1.000366 MiB name: RG_ring_5_47180 00:04:05.005 size: 0.125366 MiB name: RG_ring_2_47180 00:04:05.005 size: 0.015991 MiB name: RG_ring_3_47180 00:04:05.005 end memzones------- 00:04:05.005 10:08:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:05.266 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 3 00:04:05.266 list of free elements. size: 1254.071899 MiB 00:04:05.266 element at address: 0x1060000000 with size: 1254.001099 MiB 00:04:05.266 element at address: 0x10c8000000 with size: 0.070129 MiB 00:04:05.266 element at address: 0x10d98b6000 with size: 0.000671 MiB 00:04:05.266 list of standard malloc elements. 
size: 197.217957 MiB 00:04:05.266 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:04:05.266 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:04:05.266 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:04:05.266 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:04:05.266 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:04:05.266 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:04:05.266 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:04:05.266 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:04:05.266 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d98b65c0 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d98b6680 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d99d6f80 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10d9ad7300 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:04:05.266 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:04:05.266 list of memzone associated elements. 
size: 596.710144 MiB 00:04:05.266 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:04:05.266 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:04:05.266 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:04:05.266 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:04:05.266 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:04:05.266 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_47180_0 00:04:05.266 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:04:05.266 associated memzone info: size: 48.000000 MiB name: MP_evtpool_47180_0 00:04:05.266 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:04:05.266 associated memzone info: size: 48.000000 MiB name: MP_msgpool_47180_0 00:04:05.266 element at address: 0x10c683d780 with size: 20.250671 MiB 00:04:05.266 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:04:05.266 element at address: 0x10ae700680 with size: 18.000671 MiB 00:04:05.266 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:04:05.266 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:04:05.266 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_47180 00:04:05.266 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:04:05.266 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_47180 00:04:05.266 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:04:05.266 associated memzone info: size: 1.007996 MiB name: MP_evtpool_47180 00:04:05.266 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:04:05.266 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:05.266 element at address: 0x10c673b640 with size: 1.008118 MiB 00:04:05.266 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:05.266 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:04:05.266 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:05.266 element at address: 0x10af980b40 with size: 1.008118 MiB 00:04:05.266 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:05.266 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:04:05.266 associated memzone info: size: 1.000366 MiB name: RG_ring_0_47180 00:04:05.266 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:04:05.266 associated memzone info: size: 1.000366 MiB name: RG_ring_1_47180 00:04:05.266 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:04:05.266 associated memzone info: size: 1.000366 MiB name: RG_ring_4_47180 00:04:05.266 element at address: 0x10ae600480 with size: 1.000488 MiB 00:04:05.266 associated memzone info: size: 1.000366 MiB name: RG_ring_5_47180 00:04:05.266 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:04:05.266 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_47180 00:04:05.266 element at address: 0x10c7c7da40 with size: 0.500488 MiB 00:04:05.266 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:05.266 element at address: 0x10af900940 with size: 0.500488 MiB 00:04:05.266 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:05.266 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:04:05.266 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:05.266 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:04:05.266 associated memzone info: size: 0.125366 MiB name: RG_ring_2_47180 00:04:05.267 
element at address: 0x10c8018a80 with size: 0.031738 MiB 00:04:05.267 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:05.267 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:04:05.267 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:05.267 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:04:05.267 associated memzone info: size: 0.015991 MiB name: RG_ring_3_47180 00:04:05.267 element at address: 0x10c8018080 with size: 0.002441 MiB 00:04:05.267 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:05.267 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:04:05.267 associated memzone info: size: 0.000183 MiB name: MP_msgpool_47180 00:04:05.267 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:04:05.267 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_47180 00:04:05.267 element at address: 0x10d98b6740 with size: 0.000305 MiB 00:04:05.267 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:05.267 10:08:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:05.267 10:08:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 47180 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 47180 ']' 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 47180 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # ps -c -o command 47180 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # tail -1 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:04:05.267 killing process with pid 47180 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 47180' 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 47180 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 47180 00:04:05.267 00:04:05.267 real 0m1.696s 00:04:05.267 user 0m1.751s 00:04:05.267 sys 0m0.776s 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:05.267 10:08:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.267 ************************************ 00:04:05.267 END TEST dpdk_mem_utility 00:04:05.267 ************************************ 00:04:05.534 10:08:10 -- spdk/autotest.sh@181 -- # run_test event /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:05.534 10:08:10 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:05.534 10:08:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:05.534 10:08:10 -- common/autotest_common.sh@10 -- # set +x 00:04:05.534 ************************************ 00:04:05.534 START TEST event 00:04:05.534 ************************************ 00:04:05.534 10:08:10 event -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:05.534 * Looking for test storage... 
00:04:05.534 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/event 00:04:05.534 10:08:11 event -- event/event.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:05.534 10:08:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:05.534 10:08:11 event -- event/event.sh@45 -- # run_test event_perf /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:05.534 10:08:11 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:04:05.534 10:08:11 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:05.534 10:08:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:05.534 ************************************ 00:04:05.534 START TEST event_perf 00:04:05.534 ************************************ 00:04:05.534 10:08:11 event.event_perf -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:05.534 Running I/O for 1 seconds...[2024-06-10 10:08:11.101442] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:05.534 [2024-06-10 10:08:11.101732] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:06.101 EAL: TSC is not safe to use in SMP mode 00:04:06.101 EAL: TSC is not invariant 00:04:06.101 [2024-06-10 10:08:11.579217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:06.101 [2024-06-10 10:08:11.653957] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:06.101 [2024-06-10 10:08:11.654010] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:06.101 [2024-06-10 10:08:11.654018] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:06.101 [2024-06-10 10:08:11.654025] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:06.101 [2024-06-10 10:08:11.657745] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.101 [2024-06-10 10:08:11.657923] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.101 Running I/O for 1 seconds...[2024-06-10 10:08:11.657872] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:04:06.101 [2024-06-10 10:08:11.657918] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:04:07.475 00:04:07.475 lcore 0: 2424948 00:04:07.475 lcore 1: 2424952 00:04:07.475 lcore 2: 2424941 00:04:07.475 lcore 3: 2424944 00:04:07.475 done. 
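Each lcore line above is the number of events that reactor processed during the one-second event_perf run; summing them gives the aggregate throughput. The values are copied from the log and the arithmetic is only illustrative:

  echo $(( 2424948 + 2424952 + 2424941 + 2424944 ))   # ~9.7 million events across 4 reactors in 1 s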
00:04:07.475 00:04:07.475 real 0m1.676s 00:04:07.475 user 0m4.165s 00:04:07.475 sys 0m0.506s 00:04:07.475 10:08:12 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:07.475 10:08:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:07.475 ************************************ 00:04:07.475 END TEST event_perf 00:04:07.475 ************************************ 00:04:07.475 10:08:12 event -- event/event.sh@46 -- # run_test event_reactor /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:07.475 10:08:12 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:04:07.475 10:08:12 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:07.475 10:08:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.475 ************************************ 00:04:07.475 START TEST event_reactor 00:04:07.475 ************************************ 00:04:07.475 10:08:12 event.event_reactor -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:07.475 [2024-06-10 10:08:12.817663] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:07.475 [2024-06-10 10:08:12.817942] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:07.734 EAL: TSC is not safe to use in SMP mode 00:04:07.734 EAL: TSC is not invariant 00:04:07.734 [2024-06-10 10:08:13.284028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.993 [2024-06-10 10:08:13.364385] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:07.993 [2024-06-10 10:08:13.366465] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.929 test_start 00:04:08.929 oneshot 00:04:08.929 tick 100 00:04:08.929 tick 100 00:04:08.929 tick 250 00:04:08.929 tick 100 00:04:08.929 tick 100 00:04:08.929 tick 100 00:04:08.929 tick 250 00:04:08.929 tick 500 00:04:08.929 tick 100 00:04:08.929 tick 100 00:04:08.929 tick 250 00:04:08.929 tick 100 00:04:08.929 tick 100 00:04:08.929 test_end 00:04:08.929 00:04:08.929 real 0m1.669s 00:04:08.929 user 0m1.171s 00:04:08.929 sys 0m0.496s 00:04:08.929 10:08:14 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:08.929 10:08:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:08.929 ************************************ 00:04:08.929 END TEST event_reactor 00:04:08.929 ************************************ 00:04:08.929 10:08:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:08.929 10:08:14 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:04:08.929 10:08:14 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:08.929 10:08:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.188 ************************************ 00:04:09.188 START TEST event_reactor_perf 00:04:09.188 ************************************ 00:04:09.188 10:08:14 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.188 [2024-06-10 10:08:14.537446] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:04:09.188 [2024-06-10 10:08:14.537764] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:09.446 EAL: TSC is not safe to use in SMP mode 00:04:09.446 EAL: TSC is not invariant 00:04:09.706 [2024-06-10 10:08:15.050580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.706 [2024-06-10 10:08:15.128020] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:09.706 [2024-06-10 10:08:15.129928] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.085 test_start 00:04:11.085 test_end 00:04:11.085 Performance: 4431449 events per second 00:04:11.085 00:04:11.085 real 0m1.717s 00:04:11.085 user 0m1.164s 00:04:11.085 sys 0m0.552s 00:04:11.085 10:08:16 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:11.085 10:08:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:11.085 ************************************ 00:04:11.085 END TEST event_reactor_perf 00:04:11.085 ************************************ 00:04:11.085 10:08:16 event -- event/event.sh@49 -- # uname -s 00:04:11.085 10:08:16 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:04:11.085 00:04:11.085 real 0m5.384s 00:04:11.085 user 0m6.693s 00:04:11.085 sys 0m1.747s 00:04:11.085 10:08:16 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:11.085 10:08:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.085 ************************************ 00:04:11.085 END TEST event 00:04:11.085 ************************************ 00:04:11.085 10:08:16 -- spdk/autotest.sh@182 -- # run_test thread /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:11.085 10:08:16 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:11.085 10:08:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:11.085 10:08:16 -- common/autotest_common.sh@10 -- # set +x 00:04:11.085 ************************************ 00:04:11.085 START TEST thread 00:04:11.085 ************************************ 00:04:11.085 10:08:16 thread -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:11.085 * Looking for test storage... 00:04:11.085 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/thread 00:04:11.085 10:08:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:11.085 10:08:16 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:04:11.085 10:08:16 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:11.085 10:08:16 thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.085 ************************************ 00:04:11.085 START TEST thread_poller_perf 00:04:11.085 ************************************ 00:04:11.085 10:08:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:11.085 [2024-06-10 10:08:16.525903] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:04:11.085 [2024-06-10 10:08:16.526202] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:11.653 EAL: TSC is not safe to use in SMP mode 00:04:11.653 EAL: TSC is not invariant 00:04:11.653 [2024-06-10 10:08:16.996108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.653 [2024-06-10 10:08:17.071573] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:11.653 [2024-06-10 10:08:17.073453] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.653 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:13.027 ====================================== 00:04:13.027 busy:2101121590 (cyc) 00:04:13.027 total_run_count: 6137000 00:04:13.027 tsc_hz: 2100000351 (cyc) 00:04:13.027 ====================================== 00:04:13.027 poller_cost: 342 (cyc), 162 (nsec) 00:04:13.027 00:04:13.027 real 0m1.675s 00:04:13.027 user 0m1.178s 00:04:13.027 sys 0m0.496s 00:04:13.027 10:08:18 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:13.027 10:08:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:13.027 ************************************ 00:04:13.027 END TEST thread_poller_perf 00:04:13.027 ************************************ 00:04:13.027 10:08:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:13.028 10:08:18 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:04:13.028 10:08:18 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:13.028 10:08:18 thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.028 ************************************ 00:04:13.028 START TEST thread_poller_perf 00:04:13.028 ************************************ 00:04:13.028 10:08:18 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:13.028 [2024-06-10 10:08:18.227695] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:13.028 [2024-06-10 10:08:18.227923] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:13.286 EAL: TSC is not safe to use in SMP mode 00:04:13.286 EAL: TSC is not invariant 00:04:13.286 [2024-06-10 10:08:18.727971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.286 [2024-06-10 10:08:18.808820] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:13.286 [2024-06-10 10:08:18.811097] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.286 Running 1000 pollers for 1 seconds with 0 microseconds period. 
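(Note: the poller_cost figure in the summary above is just the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. A quick shell check over the numbers printed for the -l 1 run, assuming the simple integer truncation the tool appears to apply, reproduces the reported values; the -l 0 run that follows works out the same way:)
awk 'BEGIN { busy=2101121590; runs=6137000; hz=2100000351;
             cyc=int(busy/runs); printf "%d (cyc), %d (nsec)\n", cyc, int(cyc*1e9/hz) }'
# -> 342 (cyc), 162 (nsec), matching the poller_cost line above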
00:04:14.657 ====================================== 00:04:14.657 busy:2101231398 (cyc) 00:04:14.657 total_run_count: 53590000 00:04:14.657 tsc_hz: 2100000351 (cyc) 00:04:14.657 ====================================== 00:04:14.657 poller_cost: 39 (cyc), 18 (nsec) 00:04:14.657 00:04:14.657 real 0m1.709s 00:04:14.657 user 0m1.173s 00:04:14.657 sys 0m0.535s 00:04:14.657 10:08:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:14.657 10:08:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:14.657 ************************************ 00:04:14.657 END TEST thread_poller_perf 00:04:14.657 ************************************ 00:04:14.657 10:08:19 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:04:14.657 10:08:19 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:14.657 10:08:19 thread -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:14.657 10:08:19 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:14.657 10:08:19 thread -- common/autotest_common.sh@10 -- # set +x 00:04:14.657 ************************************ 00:04:14.657 START TEST thread_spdk_lock 00:04:14.657 ************************************ 00:04:14.657 10:08:19 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:14.658 [2024-06-10 10:08:19.965473] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:14.658 [2024-06-10 10:08:19.965663] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:15.222 EAL: TSC is not safe to use in SMP mode 00:04:15.222 EAL: TSC is not invariant 00:04:15.222 [2024-06-10 10:08:20.568069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.222 [2024-06-10 10:08:20.663498] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:15.222 [2024-06-10 10:08:20.663573] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:04:15.222 [2024-06-10 10:08:20.666748] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.223 [2024-06-10 10:08:20.666738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.789 [2024-06-10 10:08:21.106445] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:15.789 [2024-06-10 10:08:21.106510] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:15.789 [2024-06-10 10:08:21.106519] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x3151a0 00:04:15.789 [2024-06-10 10:08:21.106871] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:15.789 [2024-06-10 10:08:21.106971] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:15.789 [2024-06-10 10:08:21.106979] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:15.789 Starting test contend 00:04:15.789 Worker Delay Wait us Hold us Total us 00:04:15.789 0 3 260923 163057 423981 00:04:15.789 1 5 164381 263665 428047 00:04:15.789 PASS test contend 00:04:15.789 Starting test hold_by_poller 00:04:15.789 PASS test hold_by_poller 00:04:15.789 Starting test hold_by_message 00:04:15.789 PASS test hold_by_message 00:04:15.789 /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:04:15.789 100014 assertions passed 00:04:15.789 0 assertions failed 00:04:15.789 00:04:15.789 real 0m1.258s 00:04:15.789 user 0m1.052s 00:04:15.789 sys 0m0.645s 00:04:15.789 10:08:21 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:15.789 ************************************ 00:04:15.789 END TEST thread_spdk_lock 00:04:15.789 ************************************ 00:04:15.789 10:08:21 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:04:15.789 00:04:15.789 real 0m4.925s 00:04:15.789 user 0m3.537s 00:04:15.789 sys 0m1.888s 00:04:15.789 10:08:21 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:15.789 10:08:21 thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.789 ************************************ 00:04:15.789 END TEST thread 00:04:15.789 ************************************ 00:04:15.789 10:08:21 -- spdk/autotest.sh@183 -- # run_test accel /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:15.789 10:08:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:15.789 10:08:21 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:15.789 10:08:21 -- common/autotest_common.sh@10 -- # set +x 00:04:15.789 ************************************ 00:04:15.789 START TEST accel 00:04:15.789 ************************************ 00:04:15.789 10:08:21 accel -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:16.049 * Looking for test storage... 
00:04:16.049 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:04:16.049 10:08:21 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:16.049 10:08:21 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:16.049 10:08:21 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:16.049 10:08:21 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=47484 00:04:16.049 10:08:21 accel -- accel/accel.sh@63 -- # waitforlisten 47484 00:04:16.049 10:08:21 accel -- common/autotest_common.sh@830 -- # '[' -z 47484 ']' 00:04:16.049 10:08:21 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.049 10:08:21 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:16.049 10:08:21 accel -- accel/accel.sh@61 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.tJ93u3 00:04:16.049 10:08:21 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.049 10:08:21 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:16.049 10:08:21 accel -- common/autotest_common.sh@10 -- # set +x 00:04:16.049 [2024-06-10 10:08:21.432282] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:16.049 [2024-06-10 10:08:21.432422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:16.307 EAL: TSC is not safe to use in SMP mode 00:04:16.307 EAL: TSC is not invariant 00:04:16.307 [2024-06-10 10:08:21.860133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.566 [2024-06-10 10:08:21.938504] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:16.566 10:08:21 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:16.566 10:08:21 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:16.566 10:08:21 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:16.566 10:08:21 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:16.566 10:08:21 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:16.566 10:08:21 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:16.566 10:08:21 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:16.566 10:08:21 accel -- accel/accel.sh@41 -- # jq -r . 00:04:16.566 [2024-06-10 10:08:21.949082] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@863 -- # return 0 00:04:16.826 10:08:22 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:16.826 10:08:22 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:16.826 10:08:22 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:16.826 10:08:22 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:16.826 10:08:22 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:16.826 10:08:22 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@10 -- # set +x 00:04:16.826 10:08:22 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # IFS== 00:04:16.826 10:08:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:16.826 10:08:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:16.826 10:08:22 accel -- accel/accel.sh@75 -- # killprocess 47484 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@949 -- # '[' -z 47484 ']' 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@953 -- # kill -0 47484 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@954 -- # uname 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@957 -- # ps -c -o command 47484 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@957 -- # tail -1 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:04:16.826 killing process with pid 47484 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 47484' 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@968 -- # kill 47484 00:04:16.826 10:08:22 accel -- common/autotest_common.sh@973 -- # wait 47484 00:04:17.085 10:08:22 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:17.085 10:08:22 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:17.085 10:08:22 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:04:17.085 10:08:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:17.085 10:08:22 accel -- common/autotest_common.sh@10 -- # set +x 00:04:17.085 10:08:22 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:04:17.085 10:08:22 accel.accel_help -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.F1cras -h 00:04:17.085 10:08:22 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:17.085 10:08:22 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:17.085 10:08:22 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:17.085 10:08:22 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:04:17.085 10:08:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:17.085 10:08:22 accel -- common/autotest_common.sh@10 -- # set +x 00:04:17.085 ************************************ 00:04:17.085 START TEST accel_missing_filename 00:04:17.085 ************************************ 00:04:17.085 10:08:22 accel.accel_missing_filename -- 
common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:04:17.085 10:08:22 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:04:17.085 10:08:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:17.086 10:08:22 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:04:17.086 10:08:22 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:17.086 10:08:22 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:04:17.086 10:08:22 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:17.086 10:08:22 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:04:17.086 10:08:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.6geEjK -t 1 -w compress 00:04:17.086 [2024-06-10 10:08:22.680930] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:17.086 [2024-06-10 10:08:22.681261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:17.654 EAL: TSC is not safe to use in SMP mode 00:04:17.654 EAL: TSC is not invariant 00:04:17.654 [2024-06-10 10:08:23.121632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.654 [2024-06-10 10:08:23.201821] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:17.654 10:08:23 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:17.654 10:08:23 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:17.654 10:08:23 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:17.654 10:08:23 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:17.654 10:08:23 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:17.654 10:08:23 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:17.654 10:08:23 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:17.654 10:08:23 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:17.654 [2024-06-10 10:08:23.213073] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.654 [2024-06-10 10:08:23.215333] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:17.654 [2024-06-10 10:08:23.243856] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:04:17.913 A filename is required. 
00:04:17.913 10:08:23 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:04:17.913 10:08:23 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:17.913 10:08:23 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:04:17.913 10:08:23 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:04:17.913 10:08:23 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:04:17.913 10:08:23 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:17.913 00:04:17.913 real 0m0.689s 00:04:17.913 user 0m0.209s 00:04:17.913 sys 0m0.481s 00:04:17.913 10:08:23 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:17.913 ************************************ 00:04:17.913 END TEST accel_missing_filename 00:04:17.913 ************************************ 00:04:17.913 10:08:23 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:17.913 10:08:23 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:17.913 10:08:23 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:04:17.913 10:08:23 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:17.913 10:08:23 accel -- common/autotest_common.sh@10 -- # set +x 00:04:17.913 ************************************ 00:04:17.913 START TEST accel_compress_verify 00:04:17.913 ************************************ 00:04:17.913 10:08:23 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:17.913 10:08:23 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:04:17.913 10:08:23 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:17.913 10:08:23 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:04:17.913 10:08:23 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:17.913 10:08:23 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:04:17.913 10:08:23 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:17.913 10:08:23 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:17.913 10:08:23 accel.accel_compress_verify -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.KqYBI4 -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:17.913 [2024-06-10 10:08:23.414281] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:17.913 [2024-06-10 10:08:23.414597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:18.481 EAL: TSC is not safe to use in SMP mode 00:04:18.481 EAL: TSC is not invariant 00:04:18.481 [2024-06-10 10:08:23.891328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.481 [2024-06-10 10:08:23.969499] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:18.481 10:08:23 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:18.481 10:08:23 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:18.481 10:08:23 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:18.481 10:08:23 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:18.481 10:08:23 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:18.481 10:08:23 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:18.481 10:08:23 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:18.481 10:08:23 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:18.481 [2024-06-10 10:08:23.981705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.481 [2024-06-10 10:08:23.983933] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:18.481 [2024-06-10 10:08:24.012532] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:04:18.784 00:04:18.784 Compression does not support the verify option, aborting. 00:04:18.784 10:08:24 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=211 00:04:18.785 10:08:24 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:18.785 10:08:24 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=83 00:04:18.785 10:08:24 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:04:18.785 10:08:24 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:04:18.785 10:08:24 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:18.785 00:04:18.785 real 0m0.743s 00:04:18.785 user 0m0.218s 00:04:18.785 sys 0m0.525s 00:04:18.785 10:08:24 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:18.785 ************************************ 00:04:18.785 END TEST accel_compress_verify 00:04:18.785 10:08:24 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:18.785 ************************************ 00:04:18.785 10:08:24 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:18.785 10:08:24 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:04:18.785 10:08:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:18.785 10:08:24 accel -- common/autotest_common.sh@10 -- # set +x 00:04:18.785 ************************************ 00:04:18.785 START TEST accel_wrong_workload 00:04:18.785 ************************************ 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:04:18.785 
10:08:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.j6oA7D -t 1 -w foobar 00:04:18.785 Unsupported workload type: foobar 00:04:18.785 [2024-06-10 10:08:24.198975] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:18.785 accel_perf options: 00:04:18.785 [-h help message] 00:04:18.785 [-q queue depth per core] 00:04:18.785 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:18.785 [-T number of threads per core 00:04:18.785 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:18.785 [-t time in seconds] 00:04:18.785 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:18.785 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:18.785 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:18.785 [-l for compress/decompress workloads, name of uncompressed input file 00:04:18.785 [-S for crc32c workload, use this seed value (default 0) 00:04:18.785 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:18.785 [-f for fill workload, use this BYTE value (default 255) 00:04:18.785 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:18.785 [-y verify result if this switch is on] 00:04:18.785 [-a tasks to allocate per core (default: same value as -q)] 00:04:18.785 Can be used to spread operations across a wider range of memory. 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:18.785 00:04:18.785 real 0m0.009s 00:04:18.785 user 0m0.003s 00:04:18.785 sys 0m0.008s 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:18.785 ************************************ 00:04:18.785 END TEST accel_wrong_workload 00:04:18.785 ************************************ 00:04:18.785 10:08:24 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:18.785 10:08:24 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:18.785 10:08:24 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:04:18.785 10:08:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:18.785 10:08:24 accel -- common/autotest_common.sh@10 -- # set +x 00:04:18.785 ************************************ 00:04:18.785 START TEST accel_negative_buffers 00:04:18.785 ************************************ 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:04:18.785 10:08:24 accel.accel_negative_buffers -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:04:18.785 10:08:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.GjiKHO -t 1 -w xor -y -x -1 00:04:18.785 -x option must be non-negative. 00:04:18.785 [2024-06-10 10:08:24.250393] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:18.785 accel_perf options: 00:04:18.785 [-h help message] 00:04:18.785 [-q queue depth per core] 00:04:18.785 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:18.785 [-T number of threads per core 00:04:18.785 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:18.785 [-t time in seconds] 00:04:18.785 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:18.785 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:18.785 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:18.785 [-l for compress/decompress workloads, name of uncompressed input file 00:04:18.785 [-S for crc32c workload, use this seed value (default 0) 00:04:18.785 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:18.785 [-f for fill workload, use this BYTE value (default 255) 00:04:18.785 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:18.785 [-y verify result if this switch is on] 00:04:18.785 [-a tasks to allocate per core (default: same value as -q)] 00:04:18.785 Can be used to spread operations across a wider range of memory. 
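(Note: accel_wrong_workload and accel_negative_buffers both run accel_perf through the harness's NOT wrapper, so the usage text and the non-zero "es" values above belong to passing tests: the test succeeds precisely because accel_perf rejects the bad -w/-x argument. A minimal sketch of that inverted-exit-status pattern, illustrative only and not the autotest_common.sh implementation, with the invocation mirroring the one in the log:)
not_() {            # succeed only if the wrapped command fails
  if "$@"; then return 1; else return 0; fi
}
not_ /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1 && echo 'negative test passed'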
00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:18.785 00:04:18.785 real 0m0.010s 00:04:18.785 user 0m0.012s 00:04:18.785 sys 0m0.000s 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:18.785 10:08:24 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:18.785 ************************************ 00:04:18.785 END TEST accel_negative_buffers 00:04:18.785 ************************************ 00:04:18.785 10:08:24 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:18.785 10:08:24 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:04:18.785 10:08:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:18.785 10:08:24 accel -- common/autotest_common.sh@10 -- # set +x 00:04:18.785 ************************************ 00:04:18.785 START TEST accel_crc32c 00:04:18.785 ************************************ 00:04:18.785 10:08:24 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:18.785 10:08:24 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:18.785 10:08:24 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:18.785 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:18.785 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:18.785 10:08:24 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:18.785 10:08:24 accel.accel_crc32c -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.htj6ie -t 1 -w crc32c -S 32 -y 00:04:18.785 [2024-06-10 10:08:24.296448] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:18.785 [2024-06-10 10:08:24.296667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:19.365 EAL: TSC is not safe to use in SMP mode 00:04:19.365 EAL: TSC is not invariant 00:04:19.365 [2024-06-10 10:08:24.765747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.365 [2024-06-10 10:08:24.841053] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 
00:04:19.365 [2024-06-10 10:08:24.852903] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.365 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:19.366 10:08:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:20.742 10:08:25 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:20.742 10:08:25 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:20.742 00:04:20.742 real 0m1.711s 00:04:20.742 user 0m1.201s 00:04:20.742 sys 0m0.516s 00:04:20.742 10:08:25 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:20.742 10:08:25 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:20.742 ************************************ 00:04:20.742 END TEST accel_crc32c 00:04:20.742 ************************************ 00:04:20.742 10:08:26 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:20.742 10:08:26 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:04:20.742 10:08:26 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:20.742 10:08:26 accel -- common/autotest_common.sh@10 -- # set +x 00:04:20.742 ************************************ 00:04:20.742 START TEST accel_crc32c_C2 00:04:20.742 ************************************ 00:04:20.742 10:08:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:20.742 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:20.742 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:20.742 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:20.742 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:20.742 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:20.742 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.RRR4mD -t 1 -w crc32c -y -C 2 00:04:20.742 [2024-06-10 10:08:26.055728] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:20.742 [2024-06-10 10:08:26.055901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:21.001 EAL: TSC is not safe to use in SMP mode 00:04:21.001 EAL: TSC is not invariant 00:04:21.001 [2024-06-10 10:08:26.496684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.001 [2024-06-10 10:08:26.571331] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:21.001 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:21.001 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:21.001 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:21.001 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:21.001 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:21.001 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:21.002 [2024-06-10 10:08:26.583825] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:21.002 
10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:21.002 10:08:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:22.378 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:22.378 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:22.378 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:22.378 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:22.378 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:22.378 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- 
# read -r var val 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:22.379 00:04:22.379 real 0m1.684s 00:04:22.379 user 0m1.197s 00:04:22.379 sys 0m0.494s 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:22.379 10:08:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:22.379 ************************************ 00:04:22.379 END TEST accel_crc32c_C2 00:04:22.379 ************************************ 00:04:22.379 10:08:27 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:22.379 10:08:27 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:04:22.379 10:08:27 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:22.379 10:08:27 accel -- common/autotest_common.sh@10 -- # set +x 00:04:22.379 ************************************ 00:04:22.379 START TEST accel_copy 00:04:22.379 ************************************ 00:04:22.379 10:08:27 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:04:22.379 10:08:27 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:22.379 10:08:27 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:04:22.379 10:08:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.379 10:08:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.379 10:08:27 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:22.379 10:08:27 accel.accel_copy -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.srK8p9 -t 1 -w copy -y 00:04:22.379 [2024-06-10 10:08:27.784420] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:04:22.379 [2024-06-10 10:08:27.784639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:22.946 EAL: TSC is not safe to use in SMP mode 00:04:22.946 EAL: TSC is not invariant 00:04:22.946 [2024-06-10 10:08:28.266161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.946 [2024-06-10 10:08:28.343625] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:04:22.946 [2024-06-10 10:08:28.350452] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:22.946 10:08:28 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.946 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:22.947 10:08:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@19 
-- # IFS=: 00:04:24.323 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:04:24.324 10:08:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:24.324 00:04:24.324 real 0m1.734s 00:04:24.324 user 0m1.207s 00:04:24.324 sys 0m0.537s 00:04:24.324 10:08:29 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:24.324 ************************************ 00:04:24.324 10:08:29 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:04:24.324 END TEST accel_copy 00:04:24.324 ************************************ 00:04:24.324 10:08:29 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:24.324 10:08:29 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:04:24.324 10:08:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:24.324 10:08:29 accel -- common/autotest_common.sh@10 -- # set +x 00:04:24.324 ************************************ 00:04:24.324 START TEST accel_fill 00:04:24.324 ************************************ 00:04:24.324 10:08:29 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:24.324 10:08:29 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:04:24.324 10:08:29 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:04:24.324 10:08:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.324 10:08:29 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:24.324 10:08:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.324 10:08:29 accel.accel_fill -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.MlcQI0 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:24.324 [2024-06-10 10:08:29.558043] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:24.324 [2024-06-10 10:08:29.558213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:24.582 EAL: TSC is not safe to use in SMP mode 00:04:24.582 EAL: TSC is not invariant 00:04:24.582 [2024-06-10 10:08:30.053782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.582 [2024-06-10 10:08:30.135912] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
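The accel_perf command line recorded just above shows how each workload in this section is driven: the harness launches build/examples/accel_perf with a generated JSON config (-c /tmp//sh-np.*) plus the workload flags. A minimal manual re-run of the same fill case, assuming the same built SPDK tree, might look like the sketch below; the per-run temp config is omitted here on the assumption that accel_perf falls back to the built-in software module without one.

    # Hypothetical standalone re-run of the fill workload seen in this log.
    # Binary path and the -t/-w/-f/-q/-a/-y flags are copied from the trace above;
    # dropping -c (the generated temp JSON config) is an assumption.
    /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w fill -f 128 -q 64 -a 64 -y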
00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:04:24.582 [2024-06-10 10:08:30.147000] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:24.582 10:08:30 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:24.583 10:08:30 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:24.583 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:24.583 10:08:30 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:25.959 10:08:31 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:04:25.959 10:08:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:25.959 00:04:25.959 real 0m1.743s 00:04:25.959 user 0m1.219s 00:04:25.959 sys 0m0.535s 00:04:25.959 10:08:31 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:25.959 10:08:31 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:04:25.959 ************************************ 00:04:25.959 END TEST accel_fill 00:04:25.959 ************************************ 00:04:25.959 10:08:31 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:25.959 10:08:31 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:04:25.959 10:08:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:25.959 10:08:31 accel -- common/autotest_common.sh@10 -- # set +x 00:04:25.959 ************************************ 00:04:25.959 START TEST accel_copy_crc32c 00:04:25.959 ************************************ 00:04:25.959 10:08:31 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:04:25.959 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:25.959 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:25.959 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:25.959 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:25.959 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:25.959 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.kh8veP -t 1 -w copy_crc32c -y 00:04:25.959 [2024-06-10 10:08:31.348990] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:25.959 [2024-06-10 10:08:31.349291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:26.218 EAL: TSC is not safe to use in SMP mode 00:04:26.218 EAL: TSC is not invariant 00:04:26.478 [2024-06-10 10:08:31.820868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.478 [2024-06-10 10:08:31.895951] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:26.478 [2024-06-10 10:08:31.908020] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 
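Most of the volume in this part of the log is xtrace output from one small loop in accel.sh: the expected settings are consumed as key:value pairs with IFS=: read -r var val, and a case "$var" statement records the module (accel_module=software) and opcode (accel_opc=copy, fill, crc32c, ...) while letting the other values (queue depth, block size, run time) pass through. A rough reconstruction of that loop, based only on the trace and not on the actual accel.sh source, with the pattern lists and sample input as stand-ins:

    # Reconstructed sketch of the var/val loop that produces the repeated
    # 'case "$var" in' / 'read -r var val' lines above. The pattern lists and the
    # sample input are assumptions; only the variable names come from the log.
    while IFS=: read -r var val; do
        case "$var" in
            software|dsa|iaa) accel_module=$var ;;                            # module in use
            copy|fill|crc32c|copy_crc32c|dualcast|compare) accel_opc=$var ;;  # opcode under test
        esac
    done < <(printf 'software:1\ncopy:1\n')          # stand-in for the real settings stream
    echo "module=$accel_module opcode=$accel_opc"    # -> module=software opcode=copy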
00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:26.478 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:26.479 10:08:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:27.855 10:08:33 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:27.855 00:04:27.855 real 0m1.719s 00:04:27.855 user 0m1.204s 00:04:27.855 sys 0m0.527s 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:27.855 10:08:33 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:27.855 ************************************ 00:04:27.855 END TEST accel_copy_crc32c 00:04:27.855 ************************************ 00:04:27.855 10:08:33 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:27.855 10:08:33 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:04:27.855 10:08:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:27.855 10:08:33 accel -- common/autotest_common.sh@10 -- # set +x 00:04:27.855 ************************************ 00:04:27.855 START TEST accel_copy_crc32c_C2 00:04:27.855 ************************************ 00:04:27.855 10:08:33 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:27.855 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:27.855 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:27.855 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:27.855 10:08:33 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:04:27.855 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:27.855 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.brDfzD -t 1 -w copy_crc32c -y -C 2 00:04:27.855 [2024-06-10 10:08:33.109724] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:27.855 [2024-06-10 10:08:33.109972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:28.113 EAL: TSC is not safe to use in SMP mode 00:04:28.113 EAL: TSC is not invariant 00:04:28.113 [2024-06-10 10:08:33.555296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.113 [2024-06-10 10:08:33.632517] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:28.113 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:28.113 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:28.113 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:28.114 [2024-06-10 10:08:33.641146] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val=copy_crc32c 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:28.114 10:08:33 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:29.492 00:04:29.492 real 0m1.689s 00:04:29.492 user 0m1.209s 00:04:29.492 sys 0m0.482s 00:04:29.492 
10:08:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:29.492 10:08:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:29.492 ************************************ 00:04:29.492 END TEST accel_copy_crc32c_C2 00:04:29.492 ************************************ 00:04:29.492 10:08:34 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:04:29.492 10:08:34 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:04:29.492 10:08:34 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:29.492 10:08:34 accel -- common/autotest_common.sh@10 -- # set +x 00:04:29.492 ************************************ 00:04:29.492 START TEST accel_dualcast 00:04:29.492 ************************************ 00:04:29.492 10:08:34 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:04:29.492 10:08:34 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:04:29.492 10:08:34 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:04:29.492 10:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:29.492 10:08:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:29.492 10:08:34 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:04:29.492 10:08:34 accel.accel_dualcast -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ya7aiV -t 1 -w dualcast -y 00:04:29.492 [2024-06-10 10:08:34.845581] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:29.492 [2024-06-10 10:08:34.845879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:29.751 EAL: TSC is not safe to use in SMP mode 00:04:29.751 EAL: TSC is not invariant 00:04:29.751 [2024-06-10 10:08:35.285288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.010 [2024-06-10 10:08:35.363349] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 
00:04:30.010 [2024-06-10 10:08:35.371494] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.010 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.011 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.011 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.011 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.011 10:08:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.011 10:08:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.011 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.011 10:08:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 
00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:04:30.947 10:08:36 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:30.947 00:04:30.947 real 0m1.683s 00:04:30.947 user 0m1.210s 00:04:30.947 sys 0m0.479s 00:04:30.947 10:08:36 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:30.947 ************************************ 00:04:30.947 END TEST accel_dualcast 00:04:30.947 ************************************ 00:04:30.947 10:08:36 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:04:31.206 10:08:36 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:04:31.206 10:08:36 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:04:31.206 10:08:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:31.206 10:08:36 accel -- common/autotest_common.sh@10 -- # set +x 00:04:31.206 ************************************ 00:04:31.206 START TEST accel_compare 00:04:31.206 ************************************ 00:04:31.206 10:08:36 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:04:31.206 10:08:36 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:04:31.206 10:08:36 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:04:31.206 10:08:36 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.206 10:08:36 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.206 10:08:36 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:04:31.206 10:08:36 accel.accel_compare -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.E98Vk2 -t 1 -w compare -y 00:04:31.206 [2024-06-10 10:08:36.571413] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:31.206 [2024-06-10 10:08:36.571669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:31.466 EAL: TSC is not safe to use in SMP mode 00:04:31.466 EAL: TSC is not invariant 00:04:31.466 [2024-06-10 10:08:37.035177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.725 [2024-06-10 10:08:37.113554] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 
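Each test in this section ends with the same three assertions, visible above as [[ -n software ]], [[ -n dualcast ]] (or the relevant opcode) and [[ software == \s\o\f\t\w\a\r\e ]]; the backslashes are just bash xtrace escaping a quoted, literal right-hand pattern, not corruption. In effect the test passes only if a module and an opcode were reported and the module is the expected one. A hedged sketch of that check (the function name and expected_module parameter are invented for illustration):

    # Sketch of the end-of-test assertion implied by the trace above.
    check_accel_result() {
        local expected_module=${1:-software}
        [[ -n "$accel_module" ]] || return 1          # a module was reported
        [[ -n "$accel_opc"    ]] || return 1          # an opcode was reported
        [[ "$accel_module" == "$expected_module" ]]   # and it is the expected module
    }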
00:04:31.725 [2024-06-10 10:08:37.125826] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:04:31.725 10:08:37 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:31.725 10:08:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:32.683 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:32.684 10:08:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:32.684 10:08:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:32.684 10:08:38 
accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:32.684 10:08:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:32.684 10:08:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:32.684 10:08:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:04:32.684 10:08:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:32.684 00:04:32.684 real 0m1.715s 00:04:32.684 user 0m1.219s 00:04:32.684 sys 0m0.504s 00:04:32.684 10:08:38 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:32.684 10:08:38 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:04:32.684 ************************************ 00:04:32.684 END TEST accel_compare 00:04:32.684 ************************************ 00:04:32.942 10:08:38 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:04:32.942 10:08:38 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:04:32.942 10:08:38 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:32.942 10:08:38 accel -- common/autotest_common.sh@10 -- # set +x 00:04:32.942 ************************************ 00:04:32.942 START TEST accel_xor 00:04:32.942 ************************************ 00:04:32.942 10:08:38 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:04:32.942 10:08:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:04:32.942 10:08:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:04:32.942 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:32.942 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:32.942 10:08:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:04:32.942 10:08:38 accel.accel_xor -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.9lvCPh -t 1 -w xor -y 00:04:32.942 [2024-06-10 10:08:38.326082] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:32.942 [2024-06-10 10:08:38.326292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:33.201 EAL: TSC is not safe to use in SMP mode 00:04:33.201 EAL: TSC is not invariant 00:04:33.201 [2024-06-10 10:08:38.761008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.459 [2024-06-10 10:08:38.842077] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
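The long runs of case "$var" / IFS=: / read -r var val entries are the harness replaying accel_perf's configuration one key:value pair at a time; the three [[ ... ]] checks that close each test confirm that a module and an opcode were captured and that the software path handled the work (compare in the test above). A rough sketch of that shell idiom, with hypothetical variable and file names rather than the harness's own:

# split "key: value" lines and assert the software module ran the expected opcode
while IFS=: read -r var val; do
  case "$var" in
    accel_module) module=${val# } ;;
    accel_opc)    opcode=${val# } ;;
  esac
done < config_dump.txt   # config_dump.txt is a stand-in for the captured accel_perf output
[[ -n $module && -n $opcode && $module == software ]] || echo "unexpected accel module/opcode"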
00:04:33.460 [2024-06-10 10:08:38.854554] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:33.460 10:08:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:34.838 10:08:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:34.838 10:08:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:04:34.839 10:08:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:34.839 00:04:34.839 real 0m1.688s 00:04:34.839 user 0m1.204s 00:04:34.839 sys 0m0.493s 00:04:34.839 10:08:40 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:34.839 10:08:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 ************************************ 00:04:34.839 END TEST accel_xor 00:04:34.839 ************************************ 00:04:34.839 10:08:40 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:04:34.839 10:08:40 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:04:34.839 10:08:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:34.839 10:08:40 accel -- common/autotest_common.sh@10 -- # set +x 00:04:34.839 ************************************ 00:04:34.839 START TEST accel_xor 00:04:34.839 ************************************ 00:04:34.839 10:08:40 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:04:34.839 10:08:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:04:34.839 10:08:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:04:34.839 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:34.839 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:34.839 10:08:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:04:34.839 10:08:40 accel.accel_xor -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.zRafSz -t 1 -w xor -y -x 3 00:04:34.839 [2024-06-10 10:08:40.056098] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:34.839 [2024-06-10 10:08:40.056276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:35.098 EAL: TSC is not safe to use in SMP mode 00:04:35.098 EAL: TSC is not invariant 00:04:35.098 [2024-06-10 10:08:40.546037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.098 [2024-06-10 10:08:40.623911] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
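The xor test is immediately repeated with three source buffers: the run_test line above adds -x 3, and the trace that follows reports val=3 where the first xor run reported val=2, so -x appears to select the number of xor sources. A minimal manual equivalent using only the flags visible in the traced command line:

# xor across three source buffers instead of the default two
/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3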
00:04:35.098 [2024-06-10 10:08:40.634531] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.098 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:35.099 10:08:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:04:36.527 10:08:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:36.527 00:04:36.527 real 0m1.734s 00:04:36.527 user 0m1.210s 00:04:36.527 sys 0m0.535s 00:04:36.527 10:08:41 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:36.527 10:08:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:04:36.528 ************************************ 00:04:36.528 END TEST accel_xor 00:04:36.528 ************************************ 00:04:36.528 10:08:41 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:04:36.528 10:08:41 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:04:36.528 10:08:41 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:36.528 10:08:41 accel -- common/autotest_common.sh@10 -- # set +x 00:04:36.528 ************************************ 00:04:36.528 START TEST accel_dif_verify 00:04:36.528 ************************************ 00:04:36.528 10:08:41 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:04:36.528 10:08:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:04:36.528 10:08:41 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:04:36.528 10:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:36.528 10:08:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:36.528 10:08:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:04:36.528 10:08:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.JQQcMQ -t 1 -w dif_verify 00:04:36.528 [2024-06-10 10:08:41.834286] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:36.528 [2024-06-10 10:08:41.834488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:36.785 EAL: TSC is not safe to use in SMP mode 00:04:36.785 EAL: TSC is not invariant 00:04:36.785 [2024-06-10 10:08:42.302536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.785 [2024-06-10 10:08:42.377593] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:36.785 10:08:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:36.785 10:08:42 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:36.785 10:08:42 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:36.785 10:08:42 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:36.785 10:08:42 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:36.785 10:08:42 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:36.785 10:08:42 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:36.785 10:08:42 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 
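The remaining runs exercise the DIF opcodes: dif_verify starts here, with dif_generate and dif_generate_copy after it. Their traces add the buffer geometry ('4096 bytes' data buffers, '512 bytes' blocks, '8 bytes' of protection information), which is consistent with a standard T10 DIF layout. A small loop that would drive the same three workloads back to back, shown as an illustration only; autotest itself wraps each one in its own run_test call:

# run the three DIF workloads for one second each against the software module
for w in dif_verify dif_generate dif_generate_copy; do
  /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w"
done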
00:04:37.043 [2024-06-10 10:08:42.388695] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.043 10:08:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.978 10:08:43 
accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:04:37.978 10:08:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:37.978 00:04:37.978 real 0m1.709s 00:04:37.978 user 0m1.215s 00:04:37.978 sys 0m0.505s 00:04:37.978 10:08:43 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:37.978 ************************************ 00:04:37.978 END TEST accel_dif_verify 00:04:37.978 ************************************ 00:04:37.978 10:08:43 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:04:37.978 10:08:43 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:04:37.978 10:08:43 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:04:37.978 10:08:43 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:37.978 10:08:43 accel -- common/autotest_common.sh@10 -- # set +x 00:04:38.237 ************************************ 00:04:38.237 START TEST accel_dif_generate 00:04:38.237 ************************************ 00:04:38.237 10:08:43 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:04:38.237 10:08:43 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:04:38.237 10:08:43 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:04:38.237 10:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.237 10:08:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.237 10:08:43 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:04:38.237 10:08:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.umIvGv -t 1 -w dif_generate 00:04:38.237 [2024-06-10 10:08:43.589192] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:04:38.237 [2024-06-10 10:08:43.589487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:38.495 EAL: TSC is not safe to use in SMP mode 00:04:38.495 EAL: TSC is not invariant 00:04:38.495 [2024-06-10 10:08:44.042526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.754 [2024-06-10 10:08:44.125267] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:04:38.754 [2024-06-10 10:08:44.137862] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:38.754 10:08:44 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.754 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case 
"$var" in 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:38.755 10:08:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:04:39.718 10:08:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:39.718 00:04:39.718 real 0m1.706s 00:04:39.718 user 0m1.220s 00:04:39.718 sys 0m0.497s 00:04:39.718 10:08:45 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:39.718 10:08:45 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:04:39.718 ************************************ 00:04:39.718 END TEST accel_dif_generate 00:04:39.718 
************************************ 00:04:39.977 10:08:45 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:04:39.977 10:08:45 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:04:39.977 10:08:45 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:39.977 10:08:45 accel -- common/autotest_common.sh@10 -- # set +x 00:04:39.977 ************************************ 00:04:39.977 START TEST accel_dif_generate_copy 00:04:39.977 ************************************ 00:04:39.977 10:08:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:04:39.977 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:39.977 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:04:39.977 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.977 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.977 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:04:39.978 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.jVSvwH -t 1 -w dif_generate_copy 00:04:39.978 [2024-06-10 10:08:45.341256] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:39.978 [2024-06-10 10:08:45.341509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:40.236 EAL: TSC is not safe to use in SMP mode 00:04:40.236 EAL: TSC is not invariant 00:04:40.236 [2024-06-10 10:08:45.793097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.496 [2024-06-10 10:08:45.871166] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 
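The wall-clock summaries recorded so far all land in a narrow band: dualcast 1.683 s, compare 1.715 s, xor 1.688 s and 1.734 s, dif_verify 1.709 s and dif_generate 1.706 s, each with roughly 1.2 s of user time and about 0.5 s of sys time. One way to pull those figures out of a capture in this format; the pattern matches the real NmN.NNNs form emitted by the shell's time output, and build.log is a placeholder name for the saved log:

# extract the per-test timing summaries from the captured log
grep -o 'real [0-9]*m[0-9.]*s' build.log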
00:04:40.496 [2024-06-10 10:08:45.883446] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:04:40.496 10:08:45 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.496 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.497 10:08:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:41.434 00:04:41.434 real 0m1.698s 00:04:41.434 user 0m1.198s 00:04:41.434 sys 0m0.514s 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:41.434 10:08:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:04:41.434 ************************************ 00:04:41.434 END TEST accel_dif_generate_copy 00:04:41.434 ************************************ 00:04:41.692 10:08:47 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:04:41.692 10:08:47 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:41.692 10:08:47 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:04:41.692 10:08:47 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:41.692 10:08:47 accel -- common/autotest_common.sh@10 -- # set +x 00:04:41.692 ************************************ 00:04:41.692 START TEST accel_comp 00:04:41.692 ************************************ 00:04:41.692 10:08:47 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:41.692 10:08:47 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:04:41.692 10:08:47 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:04:41.692 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:41.692 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:41.692 10:08:47 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:41.692 10:08:47 accel.accel_comp -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.hEkmsz -t 1 -w compress -l 
/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:41.692 [2024-06-10 10:08:47.083090] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:41.692 [2024-06-10 10:08:47.083385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:41.951 EAL: TSC is not safe to use in SMP mode 00:04:41.951 EAL: TSC is not invariant 00:04:41.951 [2024-06-10 10:08:47.546322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.210 [2024-06-10 10:08:47.622904] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:04:42.211 [2024-06-10 10:08:47.635991] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 
10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:42.211 10:08:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:43.587 10:08:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:04:43.588 10:08:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:43.588 00:04:43.588 real 0m1.712s 00:04:43.588 user 0m1.218s 00:04:43.588 sys 0m0.503s 00:04:43.588 10:08:48 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:43.588 ************************************ 00:04:43.588 END TEST accel_comp 00:04:43.588 ************************************ 00:04:43.588 10:08:48 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:04:43.588 10:08:48 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:43.588 10:08:48 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:04:43.588 10:08:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:43.588 10:08:48 accel -- common/autotest_common.sh@10 -- # set +x 00:04:43.588 ************************************ 00:04:43.588 START TEST accel_decomp 00:04:43.588 ************************************ 00:04:43.588 10:08:48 accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:43.588 10:08:48 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:04:43.588 10:08:48 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:04:43.588 10:08:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.588 10:08:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.588 10:08:48 accel.accel_decomp 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:43.588 10:08:48 accel.accel_decomp -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.FAlf9t -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:43.588 [2024-06-10 10:08:48.835666] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:43.588 [2024-06-10 10:08:48.835872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:43.846 EAL: TSC is not safe to use in SMP mode 00:04:43.846 EAL: TSC is not invariant 00:04:43.846 [2024-06-10 10:08:49.298009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.846 [2024-06-10 10:08:49.393482] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:04:43.846 [2024-06-10 10:08:49.402304] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 
-- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:43.846 10:08:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:04:45.222 10:08:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:45.222 00:04:45.222 real 0m1.721s 00:04:45.222 user 0m1.215s 00:04:45.222 sys 0m0.514s 00:04:45.222 10:08:50 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:45.222 10:08:50 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:04:45.222 ************************************ 00:04:45.222 END TEST accel_decomp 00:04:45.222 ************************************ 00:04:45.222 10:08:50 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:45.222 10:08:50 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:04:45.222 10:08:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:45.222 10:08:50 accel -- common/autotest_common.sh@10 -- # set +x 00:04:45.222 ************************************ 00:04:45.222 START TEST accel_decomp_full 00:04:45.222 
************************************ 00:04:45.222 10:08:50 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:45.222 10:08:50 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:04:45.222 10:08:50 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:04:45.222 10:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.222 10:08:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.222 10:08:50 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:45.222 10:08:50 accel.accel_decomp_full -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.DR1GU1 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:04:45.222 [2024-06-10 10:08:50.598710] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:45.222 [2024-06-10 10:08:50.598870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:45.480 EAL: TSC is not safe to use in SMP mode 00:04:45.480 EAL: TSC is not invariant 00:04:45.480 [2024-06-10 10:08:51.058305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.739 [2024-06-10 10:08:51.135146] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 
00:04:45.739 [2024-06-10 10:08:51.146179] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 
10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.739 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:45.740 10:08:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:47.119 10:08:52 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:04:47.119 10:08:52 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:47.119 00:04:47.119 real 0m1.714s 00:04:47.119 user 0m1.213s 00:04:47.119 sys 0m0.512s 00:04:47.119 10:08:52 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:47.119 10:08:52 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:04:47.119 ************************************ 00:04:47.119 END TEST accel_decomp_full 00:04:47.119 ************************************ 00:04:47.119 10:08:52 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:47.119 10:08:52 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:04:47.119 10:08:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:47.119 10:08:52 accel -- common/autotest_common.sh@10 -- # set +x 00:04:47.119 ************************************ 00:04:47.119 START TEST accel_decomp_mcore 00:04:47.119 ************************************ 00:04:47.119 10:08:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:47.119 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:04:47.119 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:04:47.119 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.119 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.119 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:47.119 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.AxsOjm -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:04:47.119 [2024-06-10 10:08:52.359640] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:04:47.119 [2024-06-10 10:08:52.359890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:47.381 EAL: TSC is not safe to use in SMP mode 00:04:47.381 EAL: TSC is not invariant 00:04:47.381 [2024-06-10 10:08:52.879557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.381 [2024-06-10 10:08:52.955066] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:47.381 [2024-06-10 10:08:52.955105] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:47.381 [2024-06-10 10:08:52.955112] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:47.381 [2024-06-10 10:08:52.955119] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:04:47.382 [2024-06-10 10:08:52.968421] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.382 [2024-06-10 10:08:52.968498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.382 [2024-06-10 10:08:52.968952] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.382 [2024-06-10 10:08:52.968823] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.382 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- 
# IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:47.640 10:08:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:48.574 00:04:48.574 real 0m1.774s 00:04:48.574 user 0m4.337s 00:04:48.574 sys 0m0.565s 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:48.574 10:08:54 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 ************************************ 00:04:48.574 END TEST accel_decomp_mcore 00:04:48.574 ************************************ 00:04:48.574 10:08:54 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:48.574 10:08:54 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:04:48.574 10:08:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:48.574 10:08:54 accel -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 ************************************ 00:04:48.574 START TEST accel_decomp_full_mcore 00:04:48.574 ************************************ 00:04:48.574 10:08:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:48.574 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:04:48.574 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:04:48.574 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:48.574 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:48.574 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:48.575 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.uhj5MV -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:04:48.833 [2024-06-10 10:08:54.172396] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:48.833 [2024-06-10 10:08:54.172561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:49.092 EAL: TSC is not safe to use in SMP mode 00:04:49.092 EAL: TSC is not invariant 00:04:49.092 [2024-06-10 10:08:54.608514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.092 [2024-06-10 10:08:54.683518] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:49.092 [2024-06-10 10:08:54.683565] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:49.092 [2024-06-10 10:08:54.683572] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:49.092 [2024-06-10 10:08:54.683579] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:49.092 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:04:49.092 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:49.092 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:49.092 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.092 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.093 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:49.093 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:04:49.093 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:04:49.353 [2024-06-10 10:08:54.696778] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.353 [2024-06-10 10:08:54.696683] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.353 [2024-06-10 10:08:54.696773] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.353 [2024-06-10 10:08:54.696741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.353 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:49.354 10:08:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read 
-r var val 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:50.290 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:50.291 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:50.291 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:04:50.291 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:04:50.291 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:04:50.291 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:04:50.291 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:50.291 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:04:50.291 10:08:55 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:50.291 00:04:50.291 real 0m1.696s 00:04:50.291 user 0m4.370s 00:04:50.291 sys 0m0.471s 00:04:50.291 10:08:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:50.291 ************************************ 00:04:50.291 END TEST accel_decomp_full_mcore 00:04:50.291 ************************************ 00:04:50.291 10:08:55 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:04:50.550 10:08:55 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:50.550 10:08:55 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:04:50.550 10:08:55 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:50.550 10:08:55 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.550 ************************************ 00:04:50.550 START TEST accel_decomp_mthread 00:04:50.550 ************************************ 00:04:50.550 10:08:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:50.550 10:08:55 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:04:50.550 10:08:55 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:04:50.550 10:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:50.550 10:08:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:50.550 10:08:55 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:50.550 10:08:55 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.YkJPvr -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:04:50.550 [2024-06-10 10:08:55.910467] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:04:50.550 [2024-06-10 10:08:55.910700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:50.810 EAL: TSC is not safe to use in SMP mode 00:04:50.810 EAL: TSC is not invariant 00:04:50.810 [2024-06-10 10:08:56.366731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.070 [2024-06-10 10:08:56.462348] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:04:51.070 [2024-06-10 10:08:56.477453] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.070 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.071 
10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:51.071 10:08:56 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:52.446 00:04:52.446 real 0m1.733s 00:04:52.446 user 0m1.225s 00:04:52.446 sys 0m0.519s 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:04:52.446 10:08:57 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:04:52.446 ************************************ 00:04:52.446 END TEST accel_decomp_mthread 00:04:52.446 ************************************ 00:04:52.446 10:08:57 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:52.446 10:08:57 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:04:52.446 10:08:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.447 10:08:57 accel -- common/autotest_common.sh@10 -- # set +x 00:04:52.447 ************************************ 00:04:52.447 START TEST accel_decomp_full_mthread 00:04:52.447 ************************************ 00:04:52.447 10:08:57 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:52.447 10:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:04:52.447 10:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:04:52.447 10:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.447 10:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.447 10:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:52.447 10:08:57 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.DC7Zvu -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:04:52.447 [2024-06-10 10:08:57.683102] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:52.447 [2024-06-10 10:08:57.683371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:52.705 EAL: TSC is not safe to use in SMP mode 00:04:52.705 EAL: TSC is not invariant 00:04:52.705 [2024-06-10 10:08:58.127277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.705 [2024-06-10 10:08:58.206512] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
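The wall of accel.sh@19-@23 xtrace lines in this block (IFS=:, read -r var val, case "$var" in, accel_opc=decompress, accel_module=software) is accel_test scanning accel_perf's output for the opcode and engine module that the accel.sh@27 checks ([[ -n software ]], [[ -n decompress ]]) later assert on. The loop below is a simplified reconstruction of that pattern, not the literal accel.sh source: the accel_perf path and the -t/-w/-l/-y/-o/-T flags are copied from the trace, the output labels matched in the case arms are assumptions, and the generated -c /tmp//sh-np.* config is omitted.

#!/usr/bin/env bash
# Simplified reconstruction of the parse loop behind the repeated
# "IFS=: / read -r var val / case $var in" xtrace lines above.
accel_perf=/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
bib=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib

accel_module=
accel_opc=

while IFS=: read -r var val; do
    case "$var" in
        *"Module"*)   accel_module=${val//[[:space:]]/} ;;  # assumed output label
        *"Workload"*) accel_opc=${val//[[:space:]]/} ;;     # assumed output label
    esac
done < <("$accel_perf" -t 1 -w decompress -l "$bib" -y -o 0 -T 2)

# Same shape as the assertions the trace shows at accel.sh@27.
[[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]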
00:04:52.705 [2024-06-10 10:08:58.219707] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:52.705 10:08:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" 
in 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.078 00:04:54.078 real 0m1.734s 00:04:54.078 user 0m1.257s 00:04:54.078 sys 0m0.484s 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.078 10:08:59 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:04:54.078 ************************************ 00:04:54.078 END TEST accel_decomp_full_mthread 00:04:54.078 ************************************ 00:04:54.078 10:08:59 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:04:54.078 10:08:59 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.y3gFLm 00:04:54.078 10:08:59 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:04:54.078 10:08:59 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:54.078 10:08:59 accel -- 
common/autotest_common.sh@10 -- # set +x 00:04:54.078 ************************************ 00:04:54.078 START TEST accel_dif_functional_tests 00:04:54.078 ************************************ 00:04:54.078 10:08:59 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.y3gFLm 00:04:54.078 [2024-06-10 10:08:59.460817] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:54.078 [2024-06-10 10:08:59.460991] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:54.644 EAL: TSC is not safe to use in SMP mode 00:04:54.644 EAL: TSC is not invariant 00:04:54.644 [2024-06-10 10:08:59.948100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:54.644 [2024-06-10 10:09:00.029805] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:54.644 [2024-06-10 10:09:00.029878] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:54.644 [2024-06-10 10:09:00.029887] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:54.644 10:09:00 accel -- accel/accel.sh@137 -- # build_accel_config 00:04:54.644 10:09:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.644 10:09:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.644 10:09:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.644 10:09:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.644 10:09:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.644 10:09:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:54.644 10:09:00 accel -- accel/accel.sh@41 -- # jq -r . 00:04:54.644 [2024-06-10 10:09:00.041236] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.644 [2024-06-10 10:09:00.041159] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.644 [2024-06-10 10:09:00.041230] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.644 00:04:54.644 00:04:54.644 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.644 http://cunit.sourceforge.net/ 00:04:54.644 00:04:54.644 00:04:54.644 Suite: accel_dif 00:04:54.644 Test: verify: DIF generated, GUARD check ...passed 00:04:54.644 Test: verify: DIF generated, APPTAG check ...passed 00:04:54.644 Test: verify: DIF generated, REFTAG check ...passed 00:04:54.644 Test: verify: DIF not generated, GUARD check ...passed 00:04:54.644 Test: verify: DIF not generated, APPTAG check ...passed 00:04:54.644 Test: verify: DIF not generated, REFTAG check ...passed 00:04:54.644 Test: verify: APPTAG correct, APPTAG check ...passed 00:04:54.644 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:04:54.644 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:04:54.644 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:04:54.644 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:04:54.644 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:04:54.644 Test: verify copy: DIF generated, GUARD check ...passed 00:04:54.644 Test: verify copy: DIF generated, APPTAG check ...passed 00:04:54.644 Test: verify copy: DIF generated, REFTAG check ...passed 00:04:54.644 Test: verify copy: DIF not generated, GUARD check ...passed 00:04:54.644 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-10 10:09:00.056677] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, 
Actual=7867 00:04:54.644 [2024-06-10 10:09:00.056747] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:04:54.644 [2024-06-10 10:09:00.056778] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:04:54.644 [2024-06-10 10:09:00.056840] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:04:54.644 [2024-06-10 10:09:00.056927] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:04:54.644 [2024-06-10 10:09:00.057027] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:04:54.644 [2024-06-10 10:09:00.057055] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:04:54.644 passed 00:04:54.644 Test: verify copy: DIF not generated, REFTAG check ...passed 00:04:54.644 Test: generate copy: DIF generated, GUARD check ...[2024-06-10 10:09:00.057083] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:04:54.644 passed 00:04:54.644 Test: generate copy: DIF generated, APTTAG check ...passed 00:04:54.644 Test: generate copy: DIF generated, REFTAG check ...passed 00:04:54.644 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:04:54.644 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:04:54.644 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:04:54.644 Test: generate copy: iovecs-len validate ...passed 00:04:54.644 Test: generate copy: buffer alignment validate ...passed 00:04:54.644 00:04:54.644 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.644 suites 1 1 n/a 0 0 00:04:54.644 tests 26 26 26 0 0 00:04:54.644 asserts 115 115 115 0 n/a 00:04:54.644 00:04:54.644 Elapsed time = 0.000 seconds 00:04:54.644 [2024-06-10 10:09:00.057227] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
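The dif.c *ERROR* lines threaded through the "verify: DIF not generated" and "verify copy: DIF not generated" cases above are expected output: those cases corrupt the guard, app tag, or ref tag on purpose and check that verification reports the mismatch, and the CUnit summary that follows shows all 26 tests passing. If the suite needs to be repeated outside the harness, the same binary can be invoked directly; note that the -c argument in the trace (/tmp//sh-np.y3gFLm) is a temporary config the test wrapper supplies via process substitution, so the file name below is only a stand-in assumption.

#!/usr/bin/env bash
# Hypothetical standalone rerun of the accel_dif CUnit suite shown above.
# ACCEL_JSON is an assumed placeholder; the harness passes its config through
# a process substitution (/tmp//sh-np.*) rather than a persistent file.
dif_bin=/usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif
ACCEL_JSON=/tmp/accel_dif.json

"$dif_bin" -c "$ACCEL_JSON"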
00:04:54.644 00:04:54.644 real 0m0.773s 00:04:54.644 user 0m0.378s 00:04:54.644 sys 0m0.525s 00:04:54.644 10:09:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.644 10:09:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:04:54.644 ************************************ 00:04:54.644 END TEST accel_dif_functional_tests 00:04:54.644 ************************************ 00:04:54.903 00:04:54.903 real 0m38.971s 00:04:54.903 user 0m32.902s 00:04:54.903 sys 0m12.921s 00:04:54.903 10:09:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:04:54.903 10:09:00 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:04:54.903 10:09:00 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.903 10:09:00 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.903 10:09:00 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:54.903 10:09:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:04:54.903 10:09:00 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.903 10:09:00 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.903 10:09:00 accel -- common/autotest_common.sh@10 -- # set +x 00:04:54.903 ************************************ 00:04:54.903 END TEST accel 00:04:54.903 ************************************ 00:04:54.903 10:09:00 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.903 10:09:00 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.903 10:09:00 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.903 10:09:00 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.903 10:09:00 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:54.903 10:09:00 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.903 10:09:00 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.903 10:09:00 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:04:54.903 10:09:00 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:04:54.903 10:09:00 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:54.903 10:09:00 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:04:54.903 10:09:00 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:04:54.903 10:09:00 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.903 10:09:00 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.903 10:09:00 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:54.903 10:09:00 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:04:54.903 10:09:00 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:04:54.903 10:09:00 -- spdk/autotest.sh@184 -- # run_test accel_rpc /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:04:54.903 10:09:00 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:54.903 10:09:00 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:54.903 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:04:54.903 ************************************ 00:04:54.903 START TEST accel_rpc 00:04:54.903 ************************************ 00:04:54.903 10:09:00 accel_rpc -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:04:54.903 * Looking for test storage... 
00:04:54.903 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:04:54.903 10:09:00 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:54.903 10:09:00 accel_rpc -- accel/accel_rpc.sh@13 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:04:54.903 10:09:00 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=48234 00:04:54.903 10:09:00 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 48234 00:04:54.903 10:09:00 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 48234 ']' 00:04:54.903 10:09:00 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.903 10:09:00 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:54.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.903 10:09:00 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.903 10:09:00 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:54.903 10:09:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.162 [2024-06-10 10:09:00.501888] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:55.162 [2024-06-10 10:09:00.502063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:55.421 EAL: TSC is not safe to use in SMP mode 00:04:55.421 EAL: TSC is not invariant 00:04:55.421 [2024-06-10 10:09:00.990453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.679 [2024-06-10 10:09:01.066353] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:55.679 [2024-06-10 10:09:01.068354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:04:56.246 10:09:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:04:56.246 10:09:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:04:56.246 10:09:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:04:56.246 10:09:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:04:56.246 10:09:01 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.246 ************************************ 00:04:56.246 START TEST accel_assign_opcode 00:04:56.246 ************************************ 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:04:56.246 [2024-06-10 10:09:01.596651] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:04:56.246 [2024-06-10 10:09:01.604645] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.246 software 00:04:56.246 00:04:56.246 real 0m0.070s 00:04:56.246 user 0m0.002s 00:04:56.246 sys 0m0.017s 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:04:56.246 10:09:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:04:56.246 ************************************ 00:04:56.246 END TEST accel_assign_opcode 00:04:56.246 ************************************ 00:04:56.246 10:09:01 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 48234 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 48234 ']' 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 48234 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@957 -- # ps -c -o command 48234 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@957 -- # tail -1 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:04:56.246 killing process with pid 48234 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 48234' 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@968 -- # kill 48234 00:04:56.246 10:09:01 accel_rpc -- common/autotest_common.sh@973 -- # wait 48234 00:04:56.504 00:04:56.504 real 0m1.647s 00:04:56.504 user 0m1.527s 00:04:56.504 sys 0m0.812s 00:04:56.504 10:09:01 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:56.504 10:09:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.504 ************************************ 00:04:56.504 END TEST accel_rpc 00:04:56.504 ************************************ 00:04:56.504 10:09:01 -- spdk/autotest.sh@185 -- # run_test app_cmdline /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:04:56.504 10:09:01 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:56.504 10:09:01 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:56.504 10:09:01 -- common/autotest_common.sh@10 -- # set +x 00:04:56.504 ************************************ 00:04:56.504 START TEST app_cmdline 00:04:56.504 ************************************ 00:04:56.504 10:09:01 app_cmdline -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:04:56.762 * Looking for test storage... 00:04:56.762 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:04:56.762 10:09:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:56.762 10:09:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=48312 00:04:56.762 10:09:02 app_cmdline -- app/cmdline.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:56.762 10:09:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 48312 00:04:56.762 10:09:02 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 48312 ']' 00:04:56.762 10:09:02 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.762 10:09:02 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:56.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.762 10:09:02 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
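The app_cmdline run that begins here starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs are reachable over the default /var/tmp/spdk.sock; the env_dpdk_get_mem_stats call further down is meant to fail with JSON-RPC error -32601. The rpc.py calls the test drives can be reproduced by hand roughly as follows (paths and method names are taken from the trace; using the default socket is an assumption):

rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Allowed by --rpcs-allowed: returns the version object logged below.
"$rpc_py" spdk_get_version

# Also allowed: the test sorts this list and expects exactly the two methods.
"$rpc_py" rpc_get_methods | jq -r '.[]' | sort

# Not in the allow list: expect {"code": -32601, "message": "Method not found"}.
if ! "$rpc_py" env_dpdk_get_mem_stats; then
    echo "env_dpdk_get_mem_stats correctly rejected"
fi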
00:04:56.762 10:09:02 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:56.762 10:09:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:56.762 [2024-06-10 10:09:02.206518] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:56.762 [2024-06-10 10:09:02.206690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:57.329 EAL: TSC is not safe to use in SMP mode 00:04:57.329 EAL: TSC is not invariant 00:04:57.329 [2024-06-10 10:09:02.674564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.329 [2024-06-10 10:09:02.758372] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:57.329 [2024-06-10 10:09:02.760474] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:04:57.896 { 00:04:57.896 "version": "SPDK v24.09-pre git sha1 c5e2a446d", 00:04:57.896 "fields": { 00:04:57.896 "major": 24, 00:04:57.896 "minor": 9, 00:04:57.896 "patch": 0, 00:04:57.896 "suffix": "-pre", 00:04:57.896 "commit": "c5e2a446d" 00:04:57.896 } 00:04:57.896 } 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:57.896 10:09:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:57.896 10:09:03 
app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:04:57.896 10:09:03 app_cmdline -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:58.156 request: 00:04:58.156 { 00:04:58.156 "method": "env_dpdk_get_mem_stats", 00:04:58.156 "req_id": 1 00:04:58.156 } 00:04:58.156 Got JSON-RPC error response 00:04:58.156 response: 00:04:58.156 { 00:04:58.156 "code": -32601, 00:04:58.156 "message": "Method not found" 00:04:58.156 } 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:58.156 10:09:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 48312 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 48312 ']' 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 48312 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@957 -- # ps -c -o command 48312 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@957 -- # tail -1 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:04:58.156 killing process with pid 48312 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 48312' 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@968 -- # kill 48312 00:04:58.156 10:09:03 app_cmdline -- common/autotest_common.sh@973 -- # wait 48312 00:04:58.414 00:04:58.414 real 0m1.929s 00:04:58.414 user 0m2.197s 00:04:58.414 sys 0m0.730s 00:04:58.414 10:09:03 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:58.414 ************************************ 00:04:58.414 END TEST app_cmdline 00:04:58.414 ************************************ 00:04:58.414 10:09:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:58.414 10:09:03 -- spdk/autotest.sh@186 -- # run_test version /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:04:58.414 10:09:03 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:58.414 10:09:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:58.414 10:09:03 -- common/autotest_common.sh@10 -- # set +x 00:04:58.414 ************************************ 00:04:58.414 START TEST version 00:04:58.414 ************************************ 00:04:58.414 10:09:03 version -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:04:58.672 * Looking for test storage... 
00:04:58.672 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:04:58.672 10:09:04 version -- app/version.sh@17 -- # get_header_version major 00:04:58.672 10:09:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:58.672 10:09:04 version -- app/version.sh@14 -- # cut -f2 00:04:58.672 10:09:04 version -- app/version.sh@14 -- # tr -d '"' 00:04:58.672 10:09:04 version -- app/version.sh@17 -- # major=24 00:04:58.672 10:09:04 version -- app/version.sh@18 -- # get_header_version minor 00:04:58.672 10:09:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:58.672 10:09:04 version -- app/version.sh@14 -- # cut -f2 00:04:58.672 10:09:04 version -- app/version.sh@14 -- # tr -d '"' 00:04:58.672 10:09:04 version -- app/version.sh@18 -- # minor=9 00:04:58.672 10:09:04 version -- app/version.sh@19 -- # get_header_version patch 00:04:58.672 10:09:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:58.672 10:09:04 version -- app/version.sh@14 -- # cut -f2 00:04:58.672 10:09:04 version -- app/version.sh@14 -- # tr -d '"' 00:04:58.672 10:09:04 version -- app/version.sh@19 -- # patch=0 00:04:58.672 10:09:04 version -- app/version.sh@20 -- # get_header_version suffix 00:04:58.672 10:09:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:04:58.672 10:09:04 version -- app/version.sh@14 -- # cut -f2 00:04:58.672 10:09:04 version -- app/version.sh@14 -- # tr -d '"' 00:04:58.672 10:09:04 version -- app/version.sh@20 -- # suffix=-pre 00:04:58.672 10:09:04 version -- app/version.sh@22 -- # version=24.9 00:04:58.672 10:09:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:58.672 10:09:04 version -- app/version.sh@28 -- # version=24.9rc0 00:04:58.672 10:09:04 version -- app/version.sh@30 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:04:58.672 10:09:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:58.672 10:09:04 version -- app/version.sh@30 -- # py_version=24.9rc0 00:04:58.672 10:09:04 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:04:58.672 00:04:58.672 real 0m0.251s 00:04:58.672 user 0m0.141s 00:04:58.672 sys 0m0.202s 00:04:58.672 10:09:04 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:58.672 10:09:04 version -- common/autotest_common.sh@10 -- # set +x 00:04:58.672 ************************************ 00:04:58.672 END TEST version 00:04:58.672 ************************************ 00:04:58.672 10:09:04 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:04:58.672 10:09:04 -- spdk/autotest.sh@189 -- # run_test blockdev_general /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:04:58.672 10:09:04 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:58.672 10:09:04 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:58.672 10:09:04 -- common/autotest_common.sh@10 -- # set +x 00:04:58.672 ************************************ 00:04:58.672 START TEST blockdev_general 00:04:58.672 ************************************ 
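(Aside on the version check that just finished: app/version.sh derives the expected string from the C macros in include/spdk/version.h and compares it with the Python package's spdk.__version__. A hand-run sketch of the same comparison is below; run it from the spdk repo root. The get_ver helper, the variable names, and the PYTHONPATH value are illustrative, not taken from the script itself.)

    # mirror the grep/cut/tr pipeline used by app/version.sh
    get_ver() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'; }
    maj=$(get_ver MAJOR); min=$(get_ver MINOR); pat=$(get_ver PATCH); suf=$(get_ver SUFFIX)
    ver="${maj}.${min}"                        # 24.9 in this run
    [[ "$pat" != 0 ]] && ver="${ver}.${pat}"   # patch only appended when non-zero
    [[ "$suf" == -pre ]] && ver="${ver}rc0"    # the -pre suffix maps to rc0
    py=$(PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)')
    [[ "$py" == "$ver" ]] && echo "version OK: $ver"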
00:04:58.672 10:09:04 blockdev_general -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:04:58.930 * Looking for test storage... 00:04:58.930 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:58.930 10:09:04 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=48447 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 48447 00:04:58.930 10:09:04 blockdev_general -- bdev/blockdev.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:04:58.930 10:09:04 blockdev_general -- common/autotest_common.sh@830 -- # '[' -z 48447 ']' 00:04:58.930 10:09:04 blockdev_general -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.930 10:09:04 blockdev_general -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:58.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.931 10:09:04 blockdev_general -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
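(The blockdev suite drives a bare spdk_tgt started with --wait-for-rpc so the bdev layout can be injected over JSON-RPC before I/O starts. A rough hand-run equivalent is sketched below; paths are relative to the spdk repo root, the Malloc size/name and the polling loop are illustrative, and the real script batches its RPCs differently than shown here.)

    ./build/bin/spdk_tgt --wait-for-rpc &
    tgt_pid=$!
    # poll until the RPC socket answers
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    ./scripts/rpc.py framework_start_init                    # release the --wait-for-rpc gate
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512    # example bdev: 32 MiB, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    # ... run tests against the target, then tear it down
    kill "$tgt_pid"; wait "$tgt_pid"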
00:04:58.931 10:09:04 blockdev_general -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:58.931 10:09:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:04:58.931 [2024-06-10 10:09:04.452212] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:04:58.931 [2024-06-10 10:09:04.452372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:59.495 EAL: TSC is not safe to use in SMP mode 00:04:59.495 EAL: TSC is not invariant 00:04:59.495 [2024-06-10 10:09:04.955878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.495 [2024-06-10 10:09:05.042267] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:59.495 [2024-06-10 10:09:05.044416] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.094 10:09:05 blockdev_general -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:00.094 10:09:05 blockdev_general -- common/autotest_common.sh@863 -- # return 0 00:05:00.094 10:09:05 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:05:00.094 10:09:05 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:05:00.094 10:09:05 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:05:00.094 10:09:05 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.094 10:09:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:00.094 [2024-06-10 10:09:05.583420] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:00.094 [2024-06-10 10:09:05.583469] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:00.094 00:05:00.094 [2024-06-10 10:09:05.591410] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:00.094 [2024-06-10 10:09:05.591443] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:00.094 00:05:00.094 Malloc0 00:05:00.094 Malloc1 00:05:00.094 Malloc2 00:05:00.094 Malloc3 00:05:00.094 Malloc4 00:05:00.094 Malloc5 00:05:00.094 Malloc6 00:05:00.094 Malloc7 00:05:00.094 Malloc8 00:05:00.094 Malloc9 00:05:00.094 [2024-06-10 10:09:05.679413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:00.094 [2024-06-10 10:09:05.679455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.094 [2024-06-10 10:09:05.679479] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ba5e980 00:05:00.094 [2024-06-10 10:09:05.679486] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.094 [2024-06-10 10:09:05.679781] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.094 [2024-06-10 10:09:05.679816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:00.094 TestPT 00:05:00.353 10:09:05 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.353 10:09:05 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:05:00.353 5000+0 records in 00:05:00.353 5000+0 records out 00:05:00.353 10240000 bytes transferred in 0.021178 secs (483516867 bytes/sec) 00:05:00.353 10:09:05 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:05:00.353 
10:09:05 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.353 10:09:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:00.353 AIO0 00:05:00.353 10:09:05 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.353 10:09:05 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:05:00.353 10:09:05 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.353 10:09:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:00.353 10:09:05 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.353 10:09:05 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:05:00.353 10:09:05 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:05:00.353 10:09:05 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.353 10:09:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:00.353 10:09:05 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.353 10:09:05 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:05:00.354 10:09:05 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.354 10:09:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:00.354 10:09:05 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.354 10:09:05 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:00.354 10:09:05 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.354 10:09:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:00.354 10:09:05 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.354 10:09:05 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:05:00.354 10:09:05 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:05:00.354 10:09:05 blockdev_general -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.354 10:09:05 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:05:00.354 10:09:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:00.616 10:09:05 blockdev_general -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.616 10:09:05 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:05:00.616 10:09:05 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:05:00.617 10:09:05 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "782e2d84-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "782e2d84-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' 
"ef32cf69-e445-8a53-8cf2-f62e97a0e4ec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ef32cf69-e445-8a53-8cf2-f62e97a0e4ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "d6b6d421-9055-e85d-b035-39ab490c6ea4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d6b6d421-9055-e85d-b035-39ab490c6ea4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d0452d2e-4510-fa56-8ac9-34578a94cfef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d0452d2e-4510-fa56-8ac9-34578a94cfef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "af04ffe2-3fa8-bc5f-8a1d-7ba5a72b63f6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "af04ffe2-3fa8-bc5f-8a1d-7ba5a72b63f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1bbdcf94-62e8-5053-b76a-b87f2dc2e5f3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1bbdcf94-62e8-5053-b76a-b87f2dc2e5f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a652075d-3c6d-505a-b980-1f77861413f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a652075d-3c6d-505a-b980-1f77861413f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e28c06dd-351d-d850-b1e6-cf96991afe53"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e28c06dd-351d-d850-b1e6-cf96991afe53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "4d082841-7337-cd51-9082-dfb7fd0ce199"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4d082841-7337-cd51-9082-dfb7fd0ce199",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "385e04e6-49ab-0b58-9b90-326ccb4fc985"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "385e04e6-49ab-0b58-9b90-326ccb4fc985",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "ff367868-4f0b-075a-8d5d-d837e5576234"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ff367868-4f0b-075a-8d5d-d837e5576234",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "31e37505-8003-f455-b9aa-6aad93a2af3e"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "31e37505-8003-f455-b9aa-6aad93a2af3e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "783ba8b7-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "783ba8b7-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "783ba8b7-2711-11ef-b084-113036b5c18d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "78330f46-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "783447b3-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "783cd3f1-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "783cd3f1-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "783cd3f1-2711-11ef-b084-113036b5c18d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7835804a-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "7836b8de-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "783e0c61-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "783e0c61-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "783e0c61-2711-11ef-b084-113036b5c18d",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7837f159-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7839298d-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "7845fd34-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "7845fd34-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:00.617 
10:09:05 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:05:00.617 10:09:05 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:05:00.617 10:09:05 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:05:00.617 10:09:05 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 48447 00:05:00.617 10:09:05 blockdev_general -- common/autotest_common.sh@949 -- # '[' -z 48447 ']' 00:05:00.617 10:09:05 blockdev_general -- common/autotest_common.sh@953 -- # kill -0 48447 00:05:00.617 10:09:05 blockdev_general -- common/autotest_common.sh@954 -- # uname 00:05:00.617 10:09:05 blockdev_general -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:05:00.617 10:09:05 blockdev_general -- common/autotest_common.sh@957 -- # ps -c -o command 48447 00:05:00.617 10:09:05 blockdev_general -- common/autotest_common.sh@957 -- # tail -1 00:05:00.617 10:09:06 blockdev_general -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:05:00.617 10:09:06 blockdev_general -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:05:00.617 killing process with pid 48447 00:05:00.617 10:09:06 blockdev_general -- common/autotest_common.sh@967 -- # echo 'killing process with pid 48447' 00:05:00.617 10:09:06 blockdev_general -- common/autotest_common.sh@968 -- # kill 48447 00:05:00.617 10:09:06 blockdev_general -- common/autotest_common.sh@973 -- # wait 48447 00:05:00.875 10:09:06 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:00.875 10:09:06 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:00.875 10:09:06 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:00.875 10:09:06 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.875 10:09:06 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:00.875 ************************************ 00:05:00.875 START TEST bdev_hello_world 00:05:00.875 ************************************ 00:05:00.875 10:09:06 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:00.875 [2024-06-10 10:09:06.304156] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:00.875 [2024-06-10 10:09:06.304309] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:01.441 EAL: TSC is not safe to use in SMP mode 00:05:01.441 EAL: TSC is not invariant 00:05:01.441 [2024-06-10 10:09:06.741950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.441 [2024-06-10 10:09:06.818658] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:05:01.441 [2024-06-10 10:09:06.820782] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.441 [2024-06-10 10:09:06.876848] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:01.441 [2024-06-10 10:09:06.876897] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:01.441 [2024-06-10 10:09:06.884824] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:01.441 [2024-06-10 10:09:06.884848] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:01.441 [2024-06-10 10:09:06.892840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:01.441 [2024-06-10 10:09:06.892867] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:01.441 [2024-06-10 10:09:06.892874] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:01.441 [2024-06-10 10:09:06.940844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:01.441 [2024-06-10 10:09:06.940893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.441 [2024-06-10 10:09:06.940901] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b9a8800 00:05:01.441 [2024-06-10 10:09:06.940908] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.441 [2024-06-10 10:09:06.941220] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.441 [2024-06-10 10:09:06.941241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:01.699 [2024-06-10 10:09:07.041030] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:01.699 [2024-06-10 10:09:07.041096] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:05:01.699 [2024-06-10 10:09:07.041120] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:01.700 [2024-06-10 10:09:07.041147] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:01.700 [2024-06-10 10:09:07.041192] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:01.700 [2024-06-10 10:09:07.041209] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:01.700 [2024-06-10 10:09:07.041232] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
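(For reference, the example exercised here is the stock hello_bdev binary; it can be pointed at any bdev defined in a JSON config, exactly as the test invokes it above. Malloc0 and the bdev.json path are the ones from this workspace.)

    build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0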
00:05:01.700 00:05:01.700 [2024-06-10 10:09:07.041252] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:01.700 00:05:01.700 real 0m0.958s 00:05:01.700 user 0m0.484s 00:05:01.700 sys 0m0.473s 00:05:01.700 10:09:07 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:01.700 ************************************ 00:05:01.700 END TEST bdev_hello_world 00:05:01.700 ************************************ 00:05:01.700 10:09:07 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:01.700 10:09:07 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:05:01.700 10:09:07 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:05:01.700 10:09:07 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:01.700 10:09:07 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:01.958 ************************************ 00:05:01.958 START TEST bdev_bounds 00:05:01.958 ************************************ 00:05:01.958 10:09:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:05:01.958 10:09:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=48499 00:05:01.958 Process bdevio pid: 48499 00:05:01.958 10:09:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.958 10:09:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 48499' 00:05:01.958 10:09:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 48499 00:05:01.959 10:09:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@830 -- # '[' -z 48499 ']' 00:05:01.959 10:09:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.959 10:09:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:01.959 10:09:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:01.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.959 10:09:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.959 10:09:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:01.959 10:09:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:01.959 [2024-06-10 10:09:07.309389] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:01.959 [2024-06-10 10:09:07.309621] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:02.218 EAL: TSC is not safe to use in SMP mode 00:05:02.218 EAL: TSC is not invariant 00:05:02.218 [2024-06-10 10:09:07.790267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:02.477 [2024-06-10 10:09:07.865006] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:02.477 [2024-06-10 10:09:07.865062] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
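(bdev_bounds runs the bdevio app in wait mode against the same generated bdev.json and then triggers the registered CUnit suites over RPC. A hand-run equivalent of the two commands visible in the trace is sketched below; paths are relative to the repo root, the pid variable is illustrative, and -s 2048 reserves memory up front in MB, matching the script's PRE_RESERVED_MEM on FreeBSD.)

    test/bdev/bdevio/bdevio -w -s 2048 --json test/bdev/bdev.json &
    bdevio_pid=$!
    test/bdev/bdevio/tests.py perform_tests    # fires the per-bdev CUnit suites shown below
    kill "$bdevio_pid" 2>/dev/null; wait "$bdevio_pid"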
00:05:02.477 [2024-06-10 10:09:07.865069] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:02.477 [2024-06-10 10:09:07.868299] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.477 [2024-06-10 10:09:07.868216] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.477 [2024-06-10 10:09:07.868300] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.477 [2024-06-10 10:09:07.924582] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:02.477 [2024-06-10 10:09:07.924647] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:02.477 [2024-06-10 10:09:07.932560] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:02.477 [2024-06-10 10:09:07.932577] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:02.477 [2024-06-10 10:09:07.940573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:02.477 [2024-06-10 10:09:07.940591] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:02.477 [2024-06-10 10:09:07.940598] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:02.477 [2024-06-10 10:09:07.988579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:02.477 [2024-06-10 10:09:07.988645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.477 [2024-06-10 10:09:07.988654] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c8e6800 00:05:02.477 [2024-06-10 10:09:07.988661] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.477 [2024-06-10 10:09:07.988947] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.477 [2024-06-10 10:09:07.988968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:02.736 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:02.736 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:05:02.736 10:09:08 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:02.736 I/O targets: 00:05:02.736 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:05:02.736 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:05:02.736 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:05:02.736 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:05:02.736 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:05:02.736 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:05:02.736 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:05:02.736 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:05:02.736 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:05:02.736 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:05:02.736 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:05:02.736 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:05:02.736 raid0: 131072 blocks of 512 bytes (64 MiB) 00:05:02.736 concat0: 131072 blocks of 512 bytes (64 MiB) 00:05:02.736 raid1: 65536 blocks of 512 bytes (32 MiB) 00:05:02.736 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:05:02.736 00:05:02.736 00:05:02.736 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.736 http://cunit.sourceforge.net/ 00:05:02.736 00:05:02.736 00:05:02.736 Suite: bdevio tests on: 
AIO0 00:05:02.736 Test: blockdev write read block ...passed 00:05:02.736 Test: blockdev write zeroes read block ...passed 00:05:02.736 Test: blockdev write zeroes read no split ...passed 00:05:02.736 Test: blockdev write zeroes read split ...passed 00:05:02.736 Test: blockdev write zeroes read split partial ...passed 00:05:02.736 Test: blockdev reset ...passed 00:05:02.736 Test: blockdev write read 8 blocks ...passed 00:05:02.736 Test: blockdev write read size > 128k ...passed 00:05:02.736 Test: blockdev write read invalid size ...passed 00:05:02.736 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.736 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.736 Test: blockdev write read max offset ...passed 00:05:02.736 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.736 Test: blockdev writev readv 8 blocks ...passed 00:05:02.736 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.736 Test: blockdev writev readv block ...passed 00:05:02.736 Test: blockdev writev readv size > 128k ...passed 00:05:02.736 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.736 Test: blockdev comparev and writev ...passed 00:05:02.736 Test: blockdev nvme passthru rw ...passed 00:05:02.736 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.736 Test: blockdev nvme admin passthru ...passed 00:05:02.736 Test: blockdev copy ...passed 00:05:02.736 Suite: bdevio tests on: raid1 00:05:02.736 Test: blockdev write read block ...passed 00:05:02.736 Test: blockdev write zeroes read block ...passed 00:05:02.997 Test: blockdev write zeroes read no split ...passed 00:05:02.997 Test: blockdev write zeroes read split ...passed 00:05:02.997 Test: blockdev write zeroes read split partial ...passed 00:05:02.997 Test: blockdev reset ...passed 00:05:02.997 Test: blockdev write read 8 blocks ...passed 00:05:02.997 Test: blockdev write read size > 128k ...passed 00:05:02.997 Test: blockdev write read invalid size ...passed 00:05:02.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.997 Test: blockdev write read max offset ...passed 00:05:02.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.997 Test: blockdev writev readv 8 blocks ...passed 00:05:02.997 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.997 Test: blockdev writev readv block ...passed 00:05:02.997 Test: blockdev writev readv size > 128k ...passed 00:05:02.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.997 Test: blockdev comparev and writev ...passed 00:05:02.997 Test: blockdev nvme passthru rw ...passed 00:05:02.997 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.997 Test: blockdev nvme admin passthru ...passed 00:05:02.997 Test: blockdev copy ...passed 00:05:02.997 Suite: bdevio tests on: concat0 00:05:02.997 Test: blockdev write read block ...passed 00:05:02.997 Test: blockdev write zeroes read block ...passed 00:05:02.997 Test: blockdev write zeroes read no split ...passed 00:05:02.997 Test: blockdev write zeroes read split ...passed 00:05:02.997 Test: blockdev write zeroes read split partial ...passed 00:05:02.997 Test: blockdev reset ...passed 00:05:02.997 Test: blockdev write read 8 blocks ...passed 00:05:02.997 Test: blockdev write read size > 128k ...passed 00:05:02.997 Test: blockdev write read invalid size ...passed 00:05:02.997 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.997 Test: blockdev write read max offset ...passed 00:05:02.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.997 Test: blockdev writev readv 8 blocks ...passed 00:05:02.997 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.997 Test: blockdev writev readv block ...passed 00:05:02.997 Test: blockdev writev readv size > 128k ...passed 00:05:02.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.997 Test: blockdev comparev and writev ...passed 00:05:02.997 Test: blockdev nvme passthru rw ...passed 00:05:02.997 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.997 Test: blockdev nvme admin passthru ...passed 00:05:02.997 Test: blockdev copy ...passed 00:05:02.997 Suite: bdevio tests on: raid0 00:05:02.997 Test: blockdev write read block ...passed 00:05:02.997 Test: blockdev write zeroes read block ...passed 00:05:02.997 Test: blockdev write zeroes read no split ...passed 00:05:02.997 Test: blockdev write zeroes read split ...passed 00:05:02.997 Test: blockdev write zeroes read split partial ...passed 00:05:02.997 Test: blockdev reset ...passed 00:05:02.997 Test: blockdev write read 8 blocks ...passed 00:05:02.997 Test: blockdev write read size > 128k ...passed 00:05:02.997 Test: blockdev write read invalid size ...passed 00:05:02.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.997 Test: blockdev write read max offset ...passed 00:05:02.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.997 Test: blockdev writev readv 8 blocks ...passed 00:05:02.997 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.997 Test: blockdev writev readv block ...passed 00:05:02.997 Test: blockdev writev readv size > 128k ...passed 00:05:02.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.997 Test: blockdev comparev and writev ...passed 00:05:02.997 Test: blockdev nvme passthru rw ...passed 00:05:02.997 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.997 Test: blockdev nvme admin passthru ...passed 00:05:02.997 Test: blockdev copy ...passed 00:05:02.997 Suite: bdevio tests on: TestPT 00:05:02.997 Test: blockdev write read block ...passed 00:05:02.997 Test: blockdev write zeroes read block ...passed 00:05:02.997 Test: blockdev write zeroes read no split ...passed 00:05:02.997 Test: blockdev write zeroes read split ...passed 00:05:02.997 Test: blockdev write zeroes read split partial ...passed 00:05:02.997 Test: blockdev reset ...passed 00:05:02.997 Test: blockdev write read 8 blocks ...passed 00:05:02.997 Test: blockdev write read size > 128k ...passed 00:05:02.997 Test: blockdev write read invalid size ...passed 00:05:02.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.997 Test: blockdev write read max offset ...passed 00:05:02.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.997 Test: blockdev writev readv 8 blocks ...passed 00:05:02.997 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.997 Test: blockdev writev readv block ...passed 00:05:02.997 Test: blockdev writev readv size > 128k ...passed 
00:05:02.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.997 Test: blockdev comparev and writev ...passed 00:05:02.997 Test: blockdev nvme passthru rw ...passed 00:05:02.997 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.997 Test: blockdev nvme admin passthru ...passed 00:05:02.997 Test: blockdev copy ...passed 00:05:02.997 Suite: bdevio tests on: Malloc2p7 00:05:02.997 Test: blockdev write read block ...passed 00:05:02.997 Test: blockdev write zeroes read block ...passed 00:05:02.997 Test: blockdev write zeroes read no split ...passed 00:05:02.997 Test: blockdev write zeroes read split ...passed 00:05:02.997 Test: blockdev write zeroes read split partial ...passed 00:05:02.997 Test: blockdev reset ...passed 00:05:02.997 Test: blockdev write read 8 blocks ...passed 00:05:02.997 Test: blockdev write read size > 128k ...passed 00:05:02.997 Test: blockdev write read invalid size ...passed 00:05:02.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.997 Test: blockdev write read max offset ...passed 00:05:02.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.997 Test: blockdev writev readv 8 blocks ...passed 00:05:02.997 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.997 Test: blockdev writev readv block ...passed 00:05:02.997 Test: blockdev writev readv size > 128k ...passed 00:05:02.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.997 Test: blockdev comparev and writev ...passed 00:05:02.997 Test: blockdev nvme passthru rw ...passed 00:05:02.997 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.997 Test: blockdev nvme admin passthru ...passed 00:05:02.997 Test: blockdev copy ...passed 00:05:02.997 Suite: bdevio tests on: Malloc2p6 00:05:02.997 Test: blockdev write read block ...passed 00:05:02.997 Test: blockdev write zeroes read block ...passed 00:05:02.997 Test: blockdev write zeroes read no split ...passed 00:05:02.997 Test: blockdev write zeroes read split ...passed 00:05:02.997 Test: blockdev write zeroes read split partial ...passed 00:05:02.997 Test: blockdev reset ...passed 00:05:02.997 Test: blockdev write read 8 blocks ...passed 00:05:02.997 Test: blockdev write read size > 128k ...passed 00:05:02.997 Test: blockdev write read invalid size ...passed 00:05:02.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.997 Test: blockdev write read max offset ...passed 00:05:02.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.997 Test: blockdev writev readv 8 blocks ...passed 00:05:02.997 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.997 Test: blockdev writev readv block ...passed 00:05:02.997 Test: blockdev writev readv size > 128k ...passed 00:05:02.997 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.997 Test: blockdev comparev and writev ...passed 00:05:02.997 Test: blockdev nvme passthru rw ...passed 00:05:02.997 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.997 Test: blockdev nvme admin passthru ...passed 00:05:02.997 Test: blockdev copy ...passed 00:05:02.997 Suite: bdevio tests on: Malloc2p5 00:05:02.997 Test: blockdev write read block ...passed 00:05:02.997 Test: blockdev write zeroes read block ...passed 00:05:02.997 Test: blockdev 
write zeroes read no split ...passed 00:05:02.997 Test: blockdev write zeroes read split ...passed 00:05:02.997 Test: blockdev write zeroes read split partial ...passed 00:05:02.997 Test: blockdev reset ...passed 00:05:02.997 Test: blockdev write read 8 blocks ...passed 00:05:02.997 Test: blockdev write read size > 128k ...passed 00:05:02.997 Test: blockdev write read invalid size ...passed 00:05:02.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.997 Test: blockdev write read max offset ...passed 00:05:02.997 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.998 Test: blockdev writev readv 8 blocks ...passed 00:05:02.998 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.998 Test: blockdev writev readv block ...passed 00:05:02.998 Test: blockdev writev readv size > 128k ...passed 00:05:02.998 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.998 Test: blockdev comparev and writev ...passed 00:05:02.998 Test: blockdev nvme passthru rw ...passed 00:05:02.998 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.998 Test: blockdev nvme admin passthru ...passed 00:05:02.998 Test: blockdev copy ...passed 00:05:02.998 Suite: bdevio tests on: Malloc2p4 00:05:02.998 Test: blockdev write read block ...passed 00:05:02.998 Test: blockdev write zeroes read block ...passed 00:05:02.998 Test: blockdev write zeroes read no split ...passed 00:05:02.998 Test: blockdev write zeroes read split ...passed 00:05:02.998 Test: blockdev write zeroes read split partial ...passed 00:05:02.998 Test: blockdev reset ...passed 00:05:02.998 Test: blockdev write read 8 blocks ...passed 00:05:02.998 Test: blockdev write read size > 128k ...passed 00:05:02.998 Test: blockdev write read invalid size ...passed 00:05:02.998 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.998 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.998 Test: blockdev write read max offset ...passed 00:05:02.998 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.998 Test: blockdev writev readv 8 blocks ...passed 00:05:02.998 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.998 Test: blockdev writev readv block ...passed 00:05:02.998 Test: blockdev writev readv size > 128k ...passed 00:05:02.998 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.998 Test: blockdev comparev and writev ...passed 00:05:02.998 Test: blockdev nvme passthru rw ...passed 00:05:02.998 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.998 Test: blockdev nvme admin passthru ...passed 00:05:02.998 Test: blockdev copy ...passed 00:05:02.998 Suite: bdevio tests on: Malloc2p3 00:05:02.998 Test: blockdev write read block ...passed 00:05:02.998 Test: blockdev write zeroes read block ...passed 00:05:02.998 Test: blockdev write zeroes read no split ...passed 00:05:02.998 Test: blockdev write zeroes read split ...passed 00:05:02.998 Test: blockdev write zeroes read split partial ...passed 00:05:02.998 Test: blockdev reset ...passed 00:05:02.998 Test: blockdev write read 8 blocks ...passed 00:05:02.998 Test: blockdev write read size > 128k ...passed 00:05:02.998 Test: blockdev write read invalid size ...passed 00:05:02.998 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.998 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:05:02.998 Test: blockdev write read max offset ...passed 00:05:02.998 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.998 Test: blockdev writev readv 8 blocks ...passed 00:05:02.998 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.998 Test: blockdev writev readv block ...passed 00:05:02.998 Test: blockdev writev readv size > 128k ...passed 00:05:02.998 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.998 Test: blockdev comparev and writev ...passed 00:05:02.998 Test: blockdev nvme passthru rw ...passed 00:05:02.998 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.998 Test: blockdev nvme admin passthru ...passed 00:05:02.998 Test: blockdev copy ...passed 00:05:02.998 Suite: bdevio tests on: Malloc2p2 00:05:02.998 Test: blockdev write read block ...passed 00:05:02.998 Test: blockdev write zeroes read block ...passed 00:05:02.998 Test: blockdev write zeroes read no split ...passed 00:05:02.998 Test: blockdev write zeroes read split ...passed 00:05:02.998 Test: blockdev write zeroes read split partial ...passed 00:05:02.998 Test: blockdev reset ...passed 00:05:02.998 Test: blockdev write read 8 blocks ...passed 00:05:02.998 Test: blockdev write read size > 128k ...passed 00:05:02.998 Test: blockdev write read invalid size ...passed 00:05:02.998 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.998 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.998 Test: blockdev write read max offset ...passed 00:05:02.998 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.998 Test: blockdev writev readv 8 blocks ...passed 00:05:02.998 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.998 Test: blockdev writev readv block ...passed 00:05:02.998 Test: blockdev writev readv size > 128k ...passed 00:05:02.998 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.998 Test: blockdev comparev and writev ...passed 00:05:02.998 Test: blockdev nvme passthru rw ...passed 00:05:02.998 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.998 Test: blockdev nvme admin passthru ...passed 00:05:02.998 Test: blockdev copy ...passed 00:05:02.998 Suite: bdevio tests on: Malloc2p1 00:05:02.998 Test: blockdev write read block ...passed 00:05:02.998 Test: blockdev write zeroes read block ...passed 00:05:02.998 Test: blockdev write zeroes read no split ...passed 00:05:02.998 Test: blockdev write zeroes read split ...passed 00:05:02.998 Test: blockdev write zeroes read split partial ...passed 00:05:02.998 Test: blockdev reset ...passed 00:05:02.998 Test: blockdev write read 8 blocks ...passed 00:05:02.998 Test: blockdev write read size > 128k ...passed 00:05:02.998 Test: blockdev write read invalid size ...passed 00:05:02.998 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.998 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.998 Test: blockdev write read max offset ...passed 00:05:02.998 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.998 Test: blockdev writev readv 8 blocks ...passed 00:05:02.998 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.998 Test: blockdev writev readv block ...passed 00:05:02.998 Test: blockdev writev readv size > 128k ...passed 00:05:02.998 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.998 Test: blockdev comparev and writev ...passed 
00:05:02.998 Test: blockdev nvme passthru rw ...passed 00:05:02.998 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.998 Test: blockdev nvme admin passthru ...passed 00:05:02.998 Test: blockdev copy ...passed 00:05:02.998 Suite: bdevio tests on: Malloc2p0 00:05:02.998 Test: blockdev write read block ...passed 00:05:02.998 Test: blockdev write zeroes read block ...passed 00:05:02.998 Test: blockdev write zeroes read no split ...passed 00:05:02.998 Test: blockdev write zeroes read split ...passed 00:05:02.998 Test: blockdev write zeroes read split partial ...passed 00:05:02.998 Test: blockdev reset ...passed 00:05:02.998 Test: blockdev write read 8 blocks ...passed 00:05:02.998 Test: blockdev write read size > 128k ...passed 00:05:02.998 Test: blockdev write read invalid size ...passed 00:05:02.998 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.998 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.998 Test: blockdev write read max offset ...passed 00:05:02.998 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.998 Test: blockdev writev readv 8 blocks ...passed 00:05:02.998 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.998 Test: blockdev writev readv block ...passed 00:05:02.998 Test: blockdev writev readv size > 128k ...passed 00:05:02.998 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.998 Test: blockdev comparev and writev ...passed 00:05:02.998 Test: blockdev nvme passthru rw ...passed 00:05:02.998 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.998 Test: blockdev nvme admin passthru ...passed 00:05:02.998 Test: blockdev copy ...passed 00:05:02.998 Suite: bdevio tests on: Malloc1p1 00:05:02.998 Test: blockdev write read block ...passed 00:05:02.998 Test: blockdev write zeroes read block ...passed 00:05:02.998 Test: blockdev write zeroes read no split ...passed 00:05:02.998 Test: blockdev write zeroes read split ...passed 00:05:02.998 Test: blockdev write zeroes read split partial ...passed 00:05:02.998 Test: blockdev reset ...passed 00:05:02.998 Test: blockdev write read 8 blocks ...passed 00:05:02.998 Test: blockdev write read size > 128k ...passed 00:05:02.998 Test: blockdev write read invalid size ...passed 00:05:02.998 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.998 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.998 Test: blockdev write read max offset ...passed 00:05:02.998 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.998 Test: blockdev writev readv 8 blocks ...passed 00:05:02.998 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.998 Test: blockdev writev readv block ...passed 00:05:02.998 Test: blockdev writev readv size > 128k ...passed 00:05:02.998 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.998 Test: blockdev comparev and writev ...passed 00:05:02.998 Test: blockdev nvme passthru rw ...passed 00:05:02.998 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.998 Test: blockdev nvme admin passthru ...passed 00:05:02.999 Test: blockdev copy ...passed 00:05:02.999 Suite: bdevio tests on: Malloc1p0 00:05:02.999 Test: blockdev write read block ...passed 00:05:02.999 Test: blockdev write zeroes read block ...passed 00:05:02.999 Test: blockdev write zeroes read no split ...passed 00:05:02.999 Test: blockdev write zeroes read split ...passed 00:05:02.999 Test: blockdev write 
zeroes read split partial ...passed 00:05:02.999 Test: blockdev reset ...passed 00:05:02.999 Test: blockdev write read 8 blocks ...passed 00:05:02.999 Test: blockdev write read size > 128k ...passed 00:05:02.999 Test: blockdev write read invalid size ...passed 00:05:02.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.999 Test: blockdev write read max offset ...passed 00:05:02.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.999 Test: blockdev writev readv 8 blocks ...passed 00:05:02.999 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.999 Test: blockdev writev readv block ...passed 00:05:02.999 Test: blockdev writev readv size > 128k ...passed 00:05:02.999 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.999 Test: blockdev comparev and writev ...passed 00:05:02.999 Test: blockdev nvme passthru rw ...passed 00:05:02.999 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.999 Test: blockdev nvme admin passthru ...passed 00:05:02.999 Test: blockdev copy ...passed 00:05:02.999 Suite: bdevio tests on: Malloc0 00:05:02.999 Test: blockdev write read block ...passed 00:05:02.999 Test: blockdev write zeroes read block ...passed 00:05:02.999 Test: blockdev write zeroes read no split ...passed 00:05:02.999 Test: blockdev write zeroes read split ...passed 00:05:02.999 Test: blockdev write zeroes read split partial ...passed 00:05:02.999 Test: blockdev reset ...passed 00:05:02.999 Test: blockdev write read 8 blocks ...passed 00:05:02.999 Test: blockdev write read size > 128k ...passed 00:05:02.999 Test: blockdev write read invalid size ...passed 00:05:02.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:02.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:02.999 Test: blockdev write read max offset ...passed 00:05:02.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:02.999 Test: blockdev writev readv 8 blocks ...passed 00:05:02.999 Test: blockdev writev readv 30 x 1block ...passed 00:05:02.999 Test: blockdev writev readv block ...passed 00:05:02.999 Test: blockdev writev readv size > 128k ...passed 00:05:02.999 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:02.999 Test: blockdev comparev and writev ...passed 00:05:02.999 Test: blockdev nvme passthru rw ...passed 00:05:02.999 Test: blockdev nvme passthru vendor specific ...passed 00:05:02.999 Test: blockdev nvme admin passthru ...passed 00:05:02.999 Test: blockdev copy ...passed 00:05:02.999 00:05:02.999 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.999 suites 16 16 n/a 0 0 00:05:02.999 tests 368 368 368 0 0 00:05:02.999 asserts 2224 2224 2224 0 n/a 00:05:02.999 00:05:02.999 Elapsed time = 0.445 seconds 00:05:02.999 0 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 48499 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 48499 ']' 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 48499 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@957 -- # tail -1 
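The xtrace following the run summary is the killprocess helper from autotest_common.sh tearing down the bdevio app (pid 48499). A minimal sketch of the pattern visible in the trace, assuming the same structure as the calls shown rather than the verbatim upstream function:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1              # bail out if the process is already gone
    if [ "$(uname)" = Linux ]; then
        # Linux branch is not exercised in this log; shown here as an assumption
        process_name=$(ps -o comm= -p "$pid")
    else
        # FreeBSD path, as in the trace: -c trims the command column to the
        # executable name and tail -1 drops the header line
        process_name=$(ps -c -o command "$pid" | tail -1)
    fi
    [ "$process_name" = sudo ] && return 1  # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

In this run the FreeBSD branch resolves to process_name=bdevio, so the kill/wait pair at the end of the trace is what actually stops the test app.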
00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@957 -- # ps -c -o command 48499 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@957 -- # process_name=bdevio 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # '[' bdevio = sudo ']' 00:05:02.999 killing process with pid 48499 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 48499' 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # kill 48499 00:05:02.999 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@973 -- # wait 48499 00:05:03.258 10:09:08 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:05:03.258 00:05:03.258 real 0m1.445s 00:05:03.258 user 0m2.672s 00:05:03.258 sys 0m0.623s 00:05:03.258 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:03.258 ************************************ 00:05:03.258 END TEST bdev_bounds 00:05:03.258 ************************************ 00:05:03.258 10:09:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:03.258 10:09:08 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:03.258 10:09:08 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:05:03.258 10:09:08 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:03.258 10:09:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:03.258 ************************************ 00:05:03.258 START TEST bdev_nbd 00:05:03.258 ************************************ 00:05:03.258 10:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:03.258 10:09:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:05:03.258 10:09:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:05:03.258 10:09:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:05:03.258 00:05:03.258 real 0m0.004s 00:05:03.258 user 0m0.005s 00:05:03.258 sys 0m0.001s 00:05:03.258 10:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:03.258 10:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:03.258 ************************************ 00:05:03.258 END TEST bdev_nbd 00:05:03.258 ************************************ 00:05:03.258 10:09:08 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:05:03.258 10:09:08 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:05:03.258 10:09:08 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:05:03.258 10:09:08 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:05:03.258 10:09:08 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:05:03.258 10:09:08 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:03.258 10:09:08 blockdev_general -- common/autotest_common.sh@10 
-- # set +x 00:05:03.258 ************************************ 00:05:03.258 START TEST bdev_fio 00:05:03.258 ************************************ 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # fio_test_suite '' 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:05:03.258 /usr/home/vagrant/spdk_repo/spdk/test/bdev /usr/home/vagrant/spdk_repo/spdk 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=verify 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type=AIO 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z verify ']' 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1298 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1312 -- # '[' verify == verify ']' 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # cat 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1322 -- # '[' AIO == AIO ']' 00:05:03.258 10:09:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # /usr/src/fio/fio --version 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # echo serialize_overlap=1 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo 
filename=Malloc1p0 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # 
echo '[job_concat0]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:04.196 10:09:09 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:04.196 ************************************ 00:05:04.196 START TEST bdev_fio_rw_verify 00:05:04.196 ************************************ 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1355 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1338 -- # local sanitizers 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # shift 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local asan_lib= 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libasan 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib= 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # asan_lib= 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:04.197 10:09:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:04.197 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_TestPT: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:04.197 fio-3.35 00:05:04.197 Starting 16 threads 00:05:04.766 EAL: TSC is not safe to use in SMP mode 00:05:04.766 EAL: TSC is not invariant 00:05:16.965 00:05:16.965 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=102695: Mon Jun 10 10:09:20 2024 00:05:16.965 read: IOPS=254k, BW=992MiB/s (1040MB/s)(9925MiB/10004msec) 00:05:16.965 slat (nsec): min=228, max=94548k, avg=3595.35, stdev=380229.83 00:05:16.965 clat (nsec): min=610, max=126511k, avg=44597.31, stdev=1204194.49 00:05:16.965 lat (nsec): min=1563, max=126511k, avg=48192.66, stdev=1262960.25 00:05:16.965 clat percentiles (usec): 00:05:16.965 | 50.000th=[ 9], 99.000th=[ 766], 99.900th=[ 857], 99.990th=[76022], 00:05:16.965 | 99.999th=[94897] 00:05:16.965 write: IOPS=422k, BW=1647MiB/s (1727MB/s)(15.9GiB/9861msec); 0 zone resets 00:05:16.965 slat (nsec): min=494, max=1104.8M, avg=20016.20, stdev=986633.47 00:05:16.965 clat (nsec): min=608, max=2100.7M, avg=102039.81, stdev=3641865.17 00:05:16.965 lat (usec): min=10, max=2100.7k, avg=122.06, stdev=3773.06 00:05:16.965 clat percentiles (usec): 00:05:16.965 | 50.000th=[ 46], 99.000th=[ 750], 99.900th=[ 1729], 00:05:16.965 | 99.990th=[ 94897], 99.999th=[189793] 00:05:16.965 bw ( MiB/s): min= 683, max= 2689, per=99.94%, avg=1646.33, stdev=39.91, samples=294 00:05:16.965 iops : min=174912, max=688573, avg=421456.07, stdev=10216.37, samples=294 00:05:16.965 lat (nsec) : 750=0.01%, 1000=0.01% 00:05:16.965 lat (usec) : 2=0.09%, 4=12.80%, 10=18.96%, 20=19.60%, 50=18.93% 00:05:16.965 lat (usec) : 100=26.45%, 250=1.43%, 500=0.09%, 750=0.50%, 1000=1.02% 00:05:16.965 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01%, 20=0.01%, 50=0.02% 00:05:16.965 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 2000=0.01% 00:05:16.965 lat (msec) : >=2000=0.01% 00:05:16.965 cpu : usr=57.18%, sys=2.56%, ctx=649279, majf=0, minf=641 00:05:16.965 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:05:16.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:16.965 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:16.965 issued rwts: total=2540805,4158323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:05:16.965 latency : target=0, window=0, percentile=100.00%, depth=8 00:05:16.965 00:05:16.965 Run status group 0 (all jobs): 00:05:16.965 READ: bw=992MiB/s (1040MB/s), 992MiB/s-992MiB/s (1040MB/s-1040MB/s), io=9925MiB (10.4GB), run=10004-10004msec 00:05:16.965 WRITE: bw=1647MiB/s (1727MB/s), 1647MiB/s-1647MiB/s (1727MB/s-1727MB/s), io=15.9GiB (17.0GB), run=9861-9861msec 00:05:16.965 00:05:16.965 real 0m12.298s 00:05:16.965 user 1m35.559s 00:05:16.965 sys 0m6.334s 00:05:16.965 10:09:21 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:16.965 10:09:21 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:05:16.965 
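The verify job above was launched through autotest_common.sh's fio_plugin helper, whose xtrace appears just before the job listing: it inspects the fio bdev plugin with ldd, looks for a linked sanitizer runtime (libasan or libclang_rt.asan), and preloads whatever it finds ahead of the plugin before invoking /usr/src/fio/fio. A hedged sketch of that pattern, following the variable names in the trace rather than the exact upstream code:

fio_plugin() {
    local plugin=$1; shift
    local fio_dir=/usr/src/fio
    local sanitizers=(libasan libclang_rt.asan)
    local asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # if the plugin links against a sanitizer runtime, that runtime must be
        # preloaded before the plugin so its interceptors are set up first
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [ -n "$asan_lib" ] && break
    done
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}

In this FreeBSD run neither sanitizer library is linked, so asan_lib stays empty and LD_PRELOAD ends up holding only the spdk_bdev plugin path, which is why the LD_PRELOAD value in the trace starts with a space.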
************************************ 00:05:16.965 END TEST bdev_fio_rw_verify 00:05:16.965 ************************************ 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local workload=trim 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local bdev_type= 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local env_context= 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local fio_dir=/usr/src/fio 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -z trim ']' 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1294 -- # '[' -n '' ']' 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1298 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1300 -- # cat 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1312 -- # '[' trim == verify ']' 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1327 -- # '[' trim == trim ']' 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # echo rw=trimwrite 00:05:16.965 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:16.966 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "782e2d84-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "782e2d84-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ef32cf69-e445-8a53-8cf2-f62e97a0e4ec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ef32cf69-e445-8a53-8cf2-f62e97a0e4ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' 
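The trim pass only targets bdevs that advertise unmap support, which is why the JSON dump that follows is piped through the jq filter shown above. Applied to that dump it keeps Malloc0 through concat0 and drops raid1 and AIO0, both of which report "unmap": false. For reference, an equivalent stand-alone query against a running SPDK target could look like the following; this rpc.py invocation is an illustration and does not appear anywhere in this log:

./scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'

The only difference from the script's usage is the leading .[]: bdev_get_bdevs returns a JSON array, while blockdev.sh feeds jq one JSON object per line.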
' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "d6b6d421-9055-e85d-b035-39ab490c6ea4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d6b6d421-9055-e85d-b035-39ab490c6ea4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d0452d2e-4510-fa56-8ac9-34578a94cfef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d0452d2e-4510-fa56-8ac9-34578a94cfef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "af04ffe2-3fa8-bc5f-8a1d-7ba5a72b63f6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "af04ffe2-3fa8-bc5f-8a1d-7ba5a72b63f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1bbdcf94-62e8-5053-b76a-b87f2dc2e5f3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1bbdcf94-62e8-5053-b76a-b87f2dc2e5f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a652075d-3c6d-505a-b980-1f77861413f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"a652075d-3c6d-505a-b980-1f77861413f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e28c06dd-351d-d850-b1e6-cf96991afe53"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e28c06dd-351d-d850-b1e6-cf96991afe53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "4d082841-7337-cd51-9082-dfb7fd0ce199"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4d082841-7337-cd51-9082-dfb7fd0ce199",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "385e04e6-49ab-0b58-9b90-326ccb4fc985"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "385e04e6-49ab-0b58-9b90-326ccb4fc985",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "ff367868-4f0b-075a-8d5d-d837e5576234"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ff367868-4f0b-075a-8d5d-d837e5576234",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "31e37505-8003-f455-b9aa-6aad93a2af3e"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "31e37505-8003-f455-b9aa-6aad93a2af3e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "783ba8b7-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "783ba8b7-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "783ba8b7-2711-11ef-b084-113036b5c18d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "78330f46-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "783447b3-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "783cd3f1-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "783cd3f1-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "783cd3f1-2711-11ef-b084-113036b5c18d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7835804a-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "7836b8de-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "783e0c61-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "783e0c61-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "783e0c61-2711-11ef-b084-113036b5c18d",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7837f159-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7839298d-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "7845fd34-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "7845fd34-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:16.966 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:05:16.966 Malloc1p0 00:05:16.966 Malloc1p1 00:05:16.966 Malloc2p0 00:05:16.966 Malloc2p1 00:05:16.966 Malloc2p2 00:05:16.966 Malloc2p3 00:05:16.966 Malloc2p4 00:05:16.966 Malloc2p5 00:05:16.966 Malloc2p6 00:05:16.966 Malloc2p7 00:05:16.966 TestPT 00:05:16.966 raid0 00:05:16.966 concat0 ]] 
00:05:16.966 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "782e2d84-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "782e2d84-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "ef32cf69-e445-8a53-8cf2-f62e97a0e4ec"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ef32cf69-e445-8a53-8cf2-f62e97a0e4ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "d6b6d421-9055-e85d-b035-39ab490c6ea4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "d6b6d421-9055-e85d-b035-39ab490c6ea4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "d0452d2e-4510-fa56-8ac9-34578a94cfef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d0452d2e-4510-fa56-8ac9-34578a94cfef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "af04ffe2-3fa8-bc5f-8a1d-7ba5a72b63f6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' 
"uuid": "af04ffe2-3fa8-bc5f-8a1d-7ba5a72b63f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1bbdcf94-62e8-5053-b76a-b87f2dc2e5f3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1bbdcf94-62e8-5053-b76a-b87f2dc2e5f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "a652075d-3c6d-505a-b980-1f77861413f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a652075d-3c6d-505a-b980-1f77861413f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e28c06dd-351d-d850-b1e6-cf96991afe53"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e28c06dd-351d-d850-b1e6-cf96991afe53",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "4d082841-7337-cd51-9082-dfb7fd0ce199"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4d082841-7337-cd51-9082-dfb7fd0ce199",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "385e04e6-49ab-0b58-9b90-326ccb4fc985"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "385e04e6-49ab-0b58-9b90-326ccb4fc985",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "ff367868-4f0b-075a-8d5d-d837e5576234"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ff367868-4f0b-075a-8d5d-d837e5576234",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "31e37505-8003-f455-b9aa-6aad93a2af3e"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "31e37505-8003-f455-b9aa-6aad93a2af3e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "783ba8b7-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "783ba8b7-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' 
"driver_specific": {' ' "raid": {' ' "uuid": "783ba8b7-2711-11ef-b084-113036b5c18d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "78330f46-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "783447b3-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "783cd3f1-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "783cd3f1-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "783cd3f1-2711-11ef-b084-113036b5c18d",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7835804a-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "7836b8de-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "783e0c61-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "783e0c61-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "783e0c61-2711-11ef-b084-113036b5c18d",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": 
"7837f159-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7839298d-2711-11ef-b084-113036b5c18d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "7845fd34-2711-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "7845fd34-2711-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 
'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:16.968 10:09:21 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- 
# set +x 00:05:16.968 ************************************ 00:05:16.968 START TEST bdev_fio_trim 00:05:16.968 ************************************ 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1355 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1338 -- # local sanitizers 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # shift 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # local asan_lib= 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libasan 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # asan_lib= 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # asan_lib= 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:16.968 10:09:22 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:16.968 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:16.968 fio-3.35 00:05:16.968 Starting 14 threads 00:05:17.227 EAL: TSC is not safe to use in SMP mode 00:05:17.227 EAL: TSC is not invariant 00:05:29.449 00:05:29.449 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=102714: Mon Jun 10 10:09:33 2024 00:05:29.449 write: IOPS=2268k, BW=8861MiB/s (9291MB/s)(86.5GiB/10001msec); 0 zone resets 00:05:29.449 slat (nsec): min=212, max=811020k, avg=1370.89, stdev=279516.43 00:05:29.449 clat (nsec): min=1132, max=1196.9M, avg=17072.39, stdev=1251116.64 00:05:29.449 lat (nsec): min=1619, max=1196.9M, avg=18443.28, stdev=1281958.72 00:05:29.449 clat percentiles (usec): 00:05:29.449 | 50.000th=[ 7], 99.000th=[ 22], 99.900th=[ 955], 99.990th=[ 971], 00:05:29.449 | 99.999th=[94897] 00:05:29.449 bw ( MiB/s): min= 3002, max=14625, per=100.00%, avg=9340.84, stdev=263.20, samples=258 00:05:29.449 iops : min=768638, max=3744117, avg=2391251.05, stdev=67378.37, samples=258 00:05:29.449 trim: IOPS=2268k, BW=8861MiB/s (9291MB/s)(86.5GiB/10001msec); 0 zone resets 00:05:29.449 slat (nsec): min=483, max=1094.5M, avg=1890.15, stdev=383652.64 00:05:29.449 clat (nsec): min=316, max=1196.9M, avg=12081.81, stdev=977470.17 00:05:29.449 lat (nsec): min=1480, max=1196.9M, avg=13971.96, stdev=1065994.02 00:05:29.449 clat percentiles (usec): 00:05:29.449 | 50.000th=[ 8], 99.000th=[ 24], 99.900th=[ 31], 99.990th=[ 50], 00:05:29.449 | 99.999th=[94897] 00:05:29.449 bw ( MiB/s): min= 3002, 
max=14625, per=100.00%, avg=9340.85, stdev=263.20, samples=258 00:05:29.449 iops : min=768638, max=3744115, avg=2391252.58, stdev=67378.35, samples=258 00:05:29.449 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:05:29.449 lat (usec) : 2=0.10%, 4=19.65%, 10=57.18%, 20=21.10%, 50=1.76% 00:05:29.449 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.17% 00:05:29.449 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:05:29.449 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:05:29.449 lat (msec) : 2000=0.01% 00:05:29.449 cpu : usr=62.87%, sys=4.91%, ctx=1015071, majf=0, minf=0 00:05:29.449 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:05:29.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:29.449 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:29.449 issued rwts: total=0,22685176,22685181,0 short=0,0,0,0 dropped=0,0,0,0 00:05:29.449 latency : target=0, window=0, percentile=100.00%, depth=8 00:05:29.449 00:05:29.449 Run status group 0 (all jobs): 00:05:29.449 WRITE: bw=8861MiB/s (9291MB/s), 8861MiB/s-8861MiB/s (9291MB/s-9291MB/s), io=86.5GiB (92.9GB), run=10001-10001msec 00:05:29.449 TRIM: bw=8861MiB/s (9291MB/s), 8861MiB/s-8861MiB/s (9291MB/s-9291MB/s), io=86.5GiB (92.9GB), run=10001-10001msec 00:05:29.449 00:05:29.449 real 0m12.466s 00:05:29.449 user 1m33.818s 00:05:29.449 sys 0m9.922s 00:05:29.449 10:09:34 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:29.449 ************************************ 00:05:29.449 END TEST bdev_fio_trim 00:05:29.449 ************************************ 00:05:29.449 10:09:34 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:05:29.449 10:09:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:05:29.449 10:09:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:29.449 10:09:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:05:29.449 /usr/home/vagrant/spdk_repo/spdk 00:05:29.449 10:09:34 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:05:29.449 00:05:29.449 real 0m25.676s 00:05:29.449 user 3m9.678s 00:05:29.449 sys 0m16.825s 00:05:29.449 10:09:34 blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:29.449 ************************************ 00:05:29.449 END TEST bdev_fio 00:05:29.449 ************************************ 00:05:29.449 10:09:34 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:29.449 10:09:34 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:29.449 10:09:34 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:29.449 10:09:34 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:05:29.449 10:09:34 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:29.449 10:09:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:29.449 ************************************ 00:05:29.449 START TEST bdev_verify 00:05:29.449 ************************************ 00:05:29.449 10:09:34 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # 
/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:29.449 [2024-06-10 10:09:34.556809] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:29.449 [2024-06-10 10:09:34.556975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:29.449 EAL: TSC is not safe to use in SMP mode 00:05:29.449 EAL: TSC is not invariant 00:05:29.449 [2024-06-10 10:09:34.989144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.707 [2024-06-10 10:09:35.082727] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:29.707 [2024-06-10 10:09:35.082800] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:29.707 [2024-06-10 10:09:35.085923] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.707 [2024-06-10 10:09:35.085915] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.707 [2024-06-10 10:09:35.149337] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:29.707 [2024-06-10 10:09:35.149389] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:29.707 [2024-06-10 10:09:35.157318] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:29.707 [2024-06-10 10:09:35.157356] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:29.707 [2024-06-10 10:09:35.165337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:29.707 [2024-06-10 10:09:35.165371] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:29.707 [2024-06-10 10:09:35.165382] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:29.707 [2024-06-10 10:09:35.213356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:29.707 [2024-06-10 10:09:35.213425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.707 [2024-06-10 10:09:35.213438] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82be30800 00:05:29.707 [2024-06-10 10:09:35.213449] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.707 [2024-06-10 10:09:35.213854] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.707 [2024-06-10 10:09:35.213881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:29.994 Running I/O for 5 seconds... 
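Note: the verify pass whose output follows is driven entirely by the bdevperf example binary with the flags already printed in this log; a minimal sketch of reproducing it by hand, with flag meanings inferred from the per-job lines bdevperf itself emits (depth 128, IO size 4096, workload verify), is:

  # Hedged sketch: re-run the same verify workload manually. All paths and flags are
  # copied from the log above; -q is the per-job queue depth, -o the I/O size in bytes,
  # -w the workload, -t the run time in seconds, and -m 0x3 the reactor core mask that
  # produced the two "Reactor started" notices. -C and the trailing '' are passed
  # through verbatim from the test wrapper.
  /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''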
00:05:35.273 00:05:35.273 Latency(us) 00:05:35.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:35.273 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x1000 00:05:35.273 Malloc0 : 5.02 6404.21 25.02 0.00 0.00 19976.92 57.54 44938.96 00:05:35.273 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x1000 length 0x1000 00:05:35.273 Malloc0 : 5.03 103.21 0.40 0.00 0.00 1239234.28 143.36 1589840.68 00:05:35.273 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x800 00:05:35.273 Malloc1p0 : 5.02 6628.38 25.89 0.00 0.00 19300.08 239.91 20597.03 00:05:35.273 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x800 length 0x800 00:05:35.273 Malloc1p0 : 5.02 7186.79 28.07 0.00 0.00 17800.12 233.08 16852.11 00:05:35.273 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x800 00:05:35.273 Malloc1p1 : 5.02 6628.05 25.89 0.00 0.00 19298.22 230.16 19972.87 00:05:35.273 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x800 length 0x800 00:05:35.273 Malloc1p1 : 5.02 7184.79 28.07 0.00 0.00 17802.17 228.21 16477.62 00:05:35.273 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x200 00:05:35.273 Malloc2p0 : 5.02 6626.13 25.88 0.00 0.00 19299.74 222.35 19223.89 00:05:35.273 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x200 length 0x200 00:05:35.273 Malloc2p0 : 5.02 7184.36 28.06 0.00 0.00 17800.51 224.30 15915.88 00:05:35.273 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x200 00:05:35.273 Malloc2p1 : 5.02 6625.73 25.88 0.00 0.00 19298.63 220.40 18474.91 00:05:35.273 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x200 length 0x200 00:05:35.273 Malloc2p1 : 5.02 7184.02 28.06 0.00 0.00 17798.62 224.30 15416.56 00:05:35.273 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x200 00:05:35.273 Malloc2p2 : 5.02 6625.42 25.88 0.00 0.00 19296.95 215.53 17601.09 00:05:35.273 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x200 length 0x200 00:05:35.273 Malloc2p2 : 5.02 7183.67 28.06 0.00 0.00 17796.78 221.38 13731.35 00:05:35.273 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x200 00:05:35.273 Malloc2p3 : 5.02 6625.11 25.88 0.00 0.00 19294.81 231.13 15354.15 00:05:35.273 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x200 length 0x200 00:05:35.273 Malloc2p3 : 5.02 7183.33 28.06 0.00 0.00 17794.88 220.40 12919.95 00:05:35.273 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x200 00:05:35.273 Malloc2p4 : 5.02 6624.79 25.88 0.00 0.00 19292.41 220.40 14729.99 
00:05:35.273 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x200 length 0x200 00:05:35.273 Malloc2p4 : 5.03 7182.97 28.06 0.00 0.00 17793.11 229.18 13044.78 00:05:35.273 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x200 00:05:35.273 Malloc2p5 : 5.02 6624.48 25.88 0.00 0.00 19290.48 225.28 14917.24 00:05:35.273 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x200 length 0x200 00:05:35.273 Malloc2p5 : 5.03 7182.63 28.06 0.00 0.00 17791.26 218.45 13544.10 00:05:35.273 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x200 00:05:35.273 Malloc2p6 : 5.02 6624.08 25.88 0.00 0.00 19288.39 224.30 15728.64 00:05:35.273 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x200 length 0x200 00:05:35.273 Malloc2p6 : 5.03 7180.34 28.05 0.00 0.00 17793.04 226.26 14168.26 00:05:35.273 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x200 00:05:35.273 Malloc2p7 : 5.02 6623.77 25.87 0.00 0.00 19286.52 226.26 16602.45 00:05:35.273 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x200 length 0x200 00:05:35.273 Malloc2p7 : 5.03 7179.92 28.05 0.00 0.00 17792.25 217.48 14667.58 00:05:35.273 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x1000 00:05:35.273 TestPT : 5.02 6538.58 25.54 0.00 0.00 19525.54 1061.06 19972.87 00:05:35.273 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x1000 length 0x1000 00:05:35.273 TestPT : 5.03 4565.71 17.83 0.00 0.00 27949.93 1014.25 78892.85 00:05:35.273 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x2000 00:05:35.273 raid0 : 5.02 6623.25 25.87 0.00 0.00 19279.70 245.76 17476.26 00:05:35.273 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x2000 length 0x2000 00:05:35.273 raid0 : 5.03 7179.40 28.04 0.00 0.00 17786.31 253.56 15229.32 00:05:35.273 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x2000 00:05:35.273 concat0 : 5.02 6622.93 25.87 0.00 0.00 19277.35 249.66 18350.08 00:05:35.273 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x2000 length 0x2000 00:05:35.273 concat0 : 5.03 7179.05 28.04 0.00 0.00 17784.30 235.03 15791.05 00:05:35.273 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x1000 00:05:35.273 raid1 : 5.03 6622.59 25.87 0.00 0.00 19274.91 278.92 19473.55 00:05:35.273 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x1000 length 0x1000 00:05:35.273 raid1 : 5.03 7178.64 28.04 0.00 0.00 17781.94 271.12 16477.62 00:05:35.273 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x0 length 0x4e2 00:05:35.273 
AIO0 : 5.05 1047.57 4.09 0.00 0.00 121326.24 1583.79 178757.21 00:05:35.273 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:35.273 Verification LBA range: start 0x4e2 length 0x4e2 00:05:35.273 AIO0 : 5.05 1063.54 4.15 0.00 0.00 119525.45 1583.79 172765.35 00:05:35.273 =================================================================================================================== 00:05:35.273 Total : 199217.43 778.19 0.00 0.00 20527.25 57.54 1589840.68 00:05:35.273 00:05:35.273 real 0m6.062s 00:05:35.273 user 0m9.864s 00:05:35.273 sys 0m0.542s 00:05:35.273 10:09:40 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:35.273 10:09:40 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:35.273 ************************************ 00:05:35.273 END TEST bdev_verify 00:05:35.273 ************************************ 00:05:35.273 10:09:40 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:35.273 10:09:40 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:05:35.273 10:09:40 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:35.273 10:09:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:35.273 ************************************ 00:05:35.273 START TEST bdev_verify_big_io 00:05:35.273 ************************************ 00:05:35.273 10:09:40 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:35.273 [2024-06-10 10:09:40.668271] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:35.273 [2024-06-10 10:09:40.668445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:35.533 EAL: TSC is not safe to use in SMP mode 00:05:35.533 EAL: TSC is not invariant 00:05:35.533 [2024-06-10 10:09:41.103965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.791 [2024-06-10 10:09:41.183714] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:35.791 [2024-06-10 10:09:41.183774] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:05:35.791 [2024-06-10 10:09:41.186730] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.791 [2024-06-10 10:09:41.186726] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.791 [2024-06-10 10:09:41.243621] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:35.791 [2024-06-10 10:09:41.243678] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:35.791 [2024-06-10 10:09:41.251607] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:35.791 [2024-06-10 10:09:41.251634] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:35.791 [2024-06-10 10:09:41.259625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:35.791 [2024-06-10 10:09:41.259663] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:35.791 [2024-06-10 10:09:41.259672] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:35.791 [2024-06-10 10:09:41.307629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:35.791 [2024-06-10 10:09:41.307682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.791 [2024-06-10 10:09:41.307691] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b602800 00:05:35.791 [2024-06-10 10:09:41.307699] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.791 [2024-06-10 10:09:41.308038] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.791 [2024-06-10 10:09:41.308057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:36.049 [2024-06-10 10:09:41.408520] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.408645] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.408713] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.408780] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.408845] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.408922] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.409013] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.409081] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.409144] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.409213] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.409283] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.409350] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.409412] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.409481] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.409550] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.409620] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:05:36.049 [2024-06-10 10:09:41.410506] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:05:36.049 [2024-06-10 10:09:41.410635] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:05:36.049 Running I/O for 5 seconds... 
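Note: the repeated queue-depth warnings above are bdevperf trimming the requested depth of 128 down to what each small bdev can hold in flight at 64 KiB per I/O (32 for the 4 MiB split bdevs, 78 for AIO0). The per-bdev capacities those limits track can be read back with the same rpc + jq pattern the framework uses elsewhere in this log (sketch; assumes rpc.py and the default application socket are available):

  # Hedged sketch: print each bdev's total size in 64 KiB units, roughly the quantity
  # the per-bdev verify queue-depth caps above scale with (the exact cap is computed
  # internally by bdevperf).
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.name): \(.num_blocks * .block_size / 65536) x 64KiB"'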
00:05:41.314 00:05:41.314 Latency(us) 00:05:41.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:41.314 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x100 00:05:41.314 Malloc0 : 5.06 4366.55 272.91 0.00 0.00 29229.62 66.80 82887.42 00:05:41.314 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x100 length 0x100 00:05:41.314 Malloc0 : 5.06 4296.40 268.52 0.00 0.00 29707.02 66.32 101861.65 00:05:41.314 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x80 00:05:41.314 Malloc1p0 : 5.08 1114.16 69.64 0.00 0.00 114359.22 573.44 172765.35 00:05:41.314 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x80 length 0x80 00:05:41.314 Malloc1p0 : 5.08 1496.74 93.55 0.00 0.00 85115.93 725.58 128825.03 00:05:41.314 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x80 00:05:41.314 Malloc1p1 : 5.09 568.54 35.53 0.00 0.00 223722.40 315.98 289606.66 00:05:41.314 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x80 length 0x80 00:05:41.314 Malloc1p1 : 5.10 558.92 34.93 0.00 0.00 227580.73 304.27 273628.36 00:05:41.314 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x20 00:05:41.314 Malloc2p0 : 5.07 551.82 34.49 0.00 0.00 57577.77 245.76 103359.62 00:05:41.314 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x20 length 0x20 00:05:41.314 Malloc2p0 : 5.08 542.06 33.88 0.00 0.00 58611.15 228.21 91875.22 00:05:41.314 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x20 00:05:41.314 Malloc2p1 : 5.07 551.78 34.49 0.00 0.00 57554.99 222.35 102860.30 00:05:41.314 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x20 length 0x20 00:05:41.314 Malloc2p1 : 5.08 542.01 33.88 0.00 0.00 58594.24 223.33 90876.57 00:05:41.314 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x20 00:05:41.314 Malloc2p2 : 5.07 551.75 34.48 0.00 0.00 57532.44 251.61 101861.65 00:05:41.314 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x20 length 0x20 00:05:41.314 Malloc2p2 : 5.08 541.97 33.87 0.00 0.00 58575.91 230.16 89877.93 00:05:41.314 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x20 00:05:41.314 Malloc2p3 : 5.08 551.71 34.48 0.00 0.00 57516.74 233.08 100863.01 00:05:41.314 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x20 length 0x20 00:05:41.314 Malloc2p3 : 5.08 541.93 33.87 0.00 0.00 58549.65 249.66 89378.61 00:05:41.314 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x20 00:05:41.314 Malloc2p4 : 5.08 551.67 34.48 0.00 0.00 57499.73 236.01 100363.69 00:05:41.314 Job: 
Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x20 length 0x20 00:05:41.314 Malloc2p4 : 5.08 541.89 33.87 0.00 0.00 58533.02 234.06 88379.96 00:05:41.314 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x20 00:05:41.314 Malloc2p5 : 5.08 551.64 34.48 0.00 0.00 57477.34 223.33 99365.04 00:05:41.314 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x20 length 0x20 00:05:41.314 Malloc2p5 : 5.08 541.86 33.87 0.00 0.00 58511.67 220.40 87381.32 00:05:41.314 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x20 00:05:41.314 Malloc2p6 : 5.08 551.61 34.48 0.00 0.00 57459.55 271.12 98366.40 00:05:41.314 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x20 length 0x20 00:05:41.314 Malloc2p6 : 5.08 541.82 33.86 0.00 0.00 58497.65 223.33 86882.00 00:05:41.314 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:05:41.314 Verification LBA range: start 0x0 length 0x20 00:05:41.314 Malloc2p7 : 5.08 551.57 34.47 0.00 0.00 57438.00 241.86 97367.76 00:05:41.314 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x20 length 0x20 00:05:41.315 Malloc2p7 : 5.08 541.78 33.86 0.00 0.00 58473.30 227.23 85883.35 00:05:41.315 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x0 length 0x100 00:05:41.315 TestPT : 5.12 565.61 35.35 0.00 0.00 222860.48 3932.16 226692.11 00:05:41.315 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x100 length 0x100 00:05:41.315 TestPT : 5.18 238.70 14.92 0.00 0.00 526631.79 6428.77 635137.36 00:05:41.315 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x0 length 0x200 00:05:41.315 raid0 : 5.10 571.46 35.72 0.00 0.00 221213.79 327.68 269633.78 00:05:41.315 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x200 length 0x200 00:05:41.315 raid0 : 5.10 558.87 34.93 0.00 0.00 226263.18 358.89 253655.49 00:05:41.315 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x0 length 0x200 00:05:41.315 concat0 : 5.10 571.42 35.71 0.00 0.00 220875.24 347.18 263641.92 00:05:41.315 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x200 length 0x200 00:05:41.315 concat0 : 5.10 561.87 35.12 0.00 0.00 224794.05 323.78 252656.84 00:05:41.315 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x0 length 0x100 00:05:41.315 raid1 : 5.10 571.39 35.71 0.00 0.00 220527.26 436.91 254654.13 00:05:41.315 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x100 length 0x100 00:05:41.315 raid1 : 5.10 561.83 35.11 0.00 0.00 224433.25 423.25 252656.84 00:05:41.315 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x0 length 0x4e 00:05:41.315 AIO0 : 5.09 565.28 35.33 0.00 
0.00 135679.50 682.67 155788.41 00:05:41.315 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:05:41.315 Verification LBA range: start 0x4e length 0x4e 00:05:41.315 AIO0 : 5.09 552.53 34.53 0.00 0.00 138857.63 526.63 151793.83 00:05:41.315 =================================================================================================================== 00:05:41.315 Total : 26469.14 1654.32 0.00 0.00 92242.13 66.32 635137.36 00:05:41.315 00:05:41.315 real 0m6.209s 00:05:41.315 user 0m11.224s 00:05:41.315 sys 0m0.521s 00:05:41.315 10:09:46 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:41.315 10:09:46 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 ************************************ 00:05:41.315 END TEST bdev_verify_big_io 00:05:41.315 ************************************ 00:05:41.315 10:09:46 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:41.315 10:09:46 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:05:41.315 10:09:46 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:41.315 10:09:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 ************************************ 00:05:41.315 START TEST bdev_write_zeroes 00:05:41.315 ************************************ 00:05:41.315 10:09:46 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:41.315 [2024-06-10 10:09:46.908053] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:41.315 [2024-06-10 10:09:46.908220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:41.881 EAL: TSC is not safe to use in SMP mode 00:05:41.881 EAL: TSC is not invariant 00:05:41.881 [2024-06-10 10:09:47.382446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.881 [2024-06-10 10:09:47.463241] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:05:41.881 [2024-06-10 10:09:47.465562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.138 [2024-06-10 10:09:47.522694] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:42.138 [2024-06-10 10:09:47.522756] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:42.138 [2024-06-10 10:09:47.530684] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:42.138 [2024-06-10 10:09:47.530721] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:42.138 [2024-06-10 10:09:47.538715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:42.138 [2024-06-10 10:09:47.538769] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:42.138 [2024-06-10 10:09:47.538784] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:42.138 [2024-06-10 10:09:47.586708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:42.138 [2024-06-10 10:09:47.586774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.138 [2024-06-10 10:09:47.586785] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa83800 00:05:42.138 [2024-06-10 10:09:47.586792] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.138 [2024-06-10 10:09:47.587187] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.138 [2024-06-10 10:09:47.587222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:42.138 Running I/O for 1 seconds... 
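Note: the single-reactor startup above is expected; unlike the two verify runs, this write_zeroes job is launched without an explicit core mask, so only core 0 comes up. The bdevs able to take the workload can be listed with the same jq select the framework used earlier for unmap, swapped to write_zeroes (sketch, reusing paths from this run):

  # Hedged sketch: mirror the framework's unmap filter, but for write_zeroes support.
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'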
00:05:43.510 00:05:43.510 Latency(us) 00:05:43.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:43.510 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.510 Malloc0 : 1.01 22186.81 86.67 0.00 0.00 5767.75 154.09 9237.45 00:05:43.510 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.510 Malloc1p0 : 1.01 22181.31 86.65 0.00 0.00 5767.36 171.64 8738.13 00:05:43.510 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.510 Malloc1p1 : 1.01 22178.26 86.63 0.00 0.00 5765.52 171.64 8675.72 00:05:43.510 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.510 Malloc2p0 : 1.01 22174.11 86.62 0.00 0.00 5765.44 174.57 8738.13 00:05:43.510 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.510 Malloc2p1 : 1.01 22171.39 86.61 0.00 0.00 5764.04 168.72 8675.72 00:05:43.510 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 Malloc2p2 : 1.01 22167.51 86.59 0.00 0.00 5763.22 168.72 8613.30 00:05:43.511 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 Malloc2p3 : 1.01 22164.98 86.58 0.00 0.00 5762.10 169.69 8613.30 00:05:43.511 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 Malloc2p4 : 1.01 22160.60 86.56 0.00 0.00 5760.82 169.69 8550.89 00:05:43.511 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 Malloc2p5 : 1.01 22158.25 86.56 0.00 0.00 5759.43 169.69 8550.89 00:05:43.511 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 Malloc2p6 : 1.01 22155.56 86.55 0.00 0.00 5757.62 180.42 8488.47 00:05:43.511 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 Malloc2p7 : 1.01 22151.70 86.53 0.00 0.00 5756.41 171.64 8488.47 00:05:43.511 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 TestPT : 1.01 22148.24 86.52 0.00 0.00 5755.40 172.62 8488.47 00:05:43.511 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 raid0 : 1.01 22144.58 86.50 0.00 0.00 5753.64 229.18 8488.47 00:05:43.511 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 concat0 : 1.01 22140.78 86.49 0.00 0.00 5752.23 232.11 8613.30 00:05:43.511 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 raid1 : 1.01 22135.61 86.47 0.00 0.00 5749.78 415.45 9237.45 00:05:43.511 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:05:43.511 AIO0 : 1.05 3348.25 13.08 0.00 0.00 37261.13 507.12 147799.26 00:05:43.511 =================================================================================================================== 00:05:43.511 Total : 335767.95 1311.59 0.00 0.00 6086.09 154.09 147799.26 00:05:43.511 00:05:43.511 real 0m2.096s 00:05:43.511 user 0m1.411s 00:05:43.511 sys 0m0.544s 00:05:43.511 10:09:48 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:43.511 10:09:48 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:05:43.511 ************************************ 00:05:43.511 END TEST bdev_write_zeroes 00:05:43.511 ************************************ 00:05:43.511 10:09:49 blockdev_general -- 
bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:43.511 10:09:49 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:05:43.511 10:09:49 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:43.511 10:09:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:43.511 ************************************ 00:05:43.511 START TEST bdev_json_nonenclosed 00:05:43.511 ************************************ 00:05:43.511 10:09:49 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:43.511 [2024-06-10 10:09:49.040558] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:43.511 [2024-06-10 10:09:49.040788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:44.076 EAL: TSC is not safe to use in SMP mode 00:05:44.076 EAL: TSC is not invariant 00:05:44.076 [2024-06-10 10:09:49.505626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.076 [2024-06-10 10:09:49.589037] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:44.076 [2024-06-10 10:09:49.591308] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.076 [2024-06-10 10:09:49.591350] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:05:44.076 [2024-06-10 10:09:49.591363] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:44.076 [2024-06-10 10:09:49.591371] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.335 00:05:44.335 real 0m0.677s 00:05:44.335 user 0m0.181s 00:05:44.335 sys 0m0.496s 00:05:44.335 10:09:49 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:44.335 10:09:49 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:05:44.335 ************************************ 00:05:44.335 END TEST bdev_json_nonenclosed 00:05:44.335 ************************************ 00:05:44.335 10:09:49 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:44.335 10:09:49 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:05:44.335 10:09:49 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:44.335 10:09:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:44.335 ************************************ 00:05:44.335 START TEST bdev_json_nonarray 00:05:44.335 ************************************ 00:05:44.335 10:09:49 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:05:44.335 [2024-06-10 10:09:49.767281] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:05:44.336 [2024-06-10 10:09:49.767510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:44.910 EAL: TSC is not safe to use in SMP mode 00:05:44.910 EAL: TSC is not invariant 00:05:44.910 [2024-06-10 10:09:50.254586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.910 [2024-06-10 10:09:50.336597] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:44.910 [2024-06-10 10:09:50.338861] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.910 [2024-06-10 10:09:50.338913] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:05:44.910 [2024-06-10 10:09:50.338934] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:05:44.910 [2024-06-10 10:09:50.338950] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.910 00:05:44.910 real 0m0.698s 00:05:44.910 user 0m0.170s 00:05:44.910 sys 0m0.527s 00:05:44.910 ************************************ 00:05:44.910 END TEST bdev_json_nonarray 00:05:44.910 ************************************ 00:05:44.910 10:09:50 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:44.910 10:09:50 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:05:44.910 10:09:50 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:05:44.910 10:09:50 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:05:44.910 10:09:50 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:05:44.910 10:09:50 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:44.910 10:09:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:44.910 ************************************ 00:05:44.911 START TEST bdev_qos 00:05:44.911 ************************************ 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # qos_test_suite '' 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=48908 00:05:45.170 Process qos testing pid: 48908 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 48908' 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 48908 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- common/autotest_common.sh@830 -- # '[' -z 48908 ']' 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:45.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
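Note: before any QoS limits are exercised, the qos test builds its two target bdevs over RPC; the rpc_cmd calls traced below amount to the following, where 128 is the size in MiB and 512 the block size, which is why both bdevs report num_blocks of 262144 (sketch using rpc.py directly instead of the rpc_cmd wrapper):

  # Hedged sketch of the RPCs the qos test issues below, via rpc.py rather than rpc_cmd.
  RPC=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_malloc_create -b Malloc_0 128 512   # 128 MiB of 512 B blocks -> 262144 blocks
  "$RPC" bdev_null_create Null_1 128 512          # same geometry, null (discard) backend
  "$RPC" bdev_get_bdevs -b Malloc_0 -t 2000       # wait for Malloc_0; timeout value copied from the trace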
00:05:45.170 10:09:50 blockdev_general.bdev_qos -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:45.170 10:09:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:45.170 [2024-06-10 10:09:50.515927] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:05:45.170 [2024-06-10 10:09:50.516095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:45.737 EAL: TSC is not safe to use in SMP mode 00:05:45.737 EAL: TSC is not invariant 00:05:45.737 [2024-06-10 10:09:51.046702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.737 [2024-06-10 10:09:51.128287] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:45.737 [2024-06-10 10:09:51.130476] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@863 -- # return 0 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:46.305 Malloc_0 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_name=Malloc_0 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local i 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:46.305 [ 00:05:46.305 { 00:05:46.305 "name": "Malloc_0", 00:05:46.305 "aliases": [ 00:05:46.305 "93a403c1-2711-11ef-b084-113036b5c18d" 00:05:46.305 ], 00:05:46.305 "product_name": "Malloc disk", 00:05:46.305 "block_size": 512, 00:05:46.305 "num_blocks": 262144, 00:05:46.305 "uuid": "93a403c1-2711-11ef-b084-113036b5c18d", 00:05:46.305 "assigned_rate_limits": { 00:05:46.305 "rw_ios_per_sec": 0, 00:05:46.305 "rw_mbytes_per_sec": 0, 00:05:46.305 "r_mbytes_per_sec": 0, 00:05:46.305 "w_mbytes_per_sec": 0 00:05:46.305 }, 00:05:46.305 "claimed": false, 00:05:46.305 "zoned": false, 00:05:46.305 "supported_io_types": { 00:05:46.305 "read": true, 
00:05:46.305 "write": true, 00:05:46.305 "unmap": true, 00:05:46.305 "write_zeroes": true, 00:05:46.305 "flush": true, 00:05:46.305 "reset": true, 00:05:46.305 "compare": false, 00:05:46.305 "compare_and_write": false, 00:05:46.305 "abort": true, 00:05:46.305 "nvme_admin": false, 00:05:46.305 "nvme_io": false 00:05:46.305 }, 00:05:46.305 "memory_domains": [ 00:05:46.305 { 00:05:46.305 "dma_device_id": "system", 00:05:46.305 "dma_device_type": 1 00:05:46.305 }, 00:05:46.305 { 00:05:46.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.305 "dma_device_type": 2 00:05:46.305 } 00:05:46.305 ], 00:05:46.305 "driver_specific": {} 00:05:46.305 } 00:05:46.305 ] 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # return 0 00:05:46.305 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:46.306 Null_1 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_name=Null_1 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local i 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:46.306 [ 00:05:46.306 { 00:05:46.306 "name": "Null_1", 00:05:46.306 "aliases": [ 00:05:46.306 "93a98153-2711-11ef-b084-113036b5c18d" 00:05:46.306 ], 00:05:46.306 "product_name": "Null disk", 00:05:46.306 "block_size": 512, 00:05:46.306 "num_blocks": 262144, 00:05:46.306 "uuid": "93a98153-2711-11ef-b084-113036b5c18d", 00:05:46.306 "assigned_rate_limits": { 00:05:46.306 "rw_ios_per_sec": 0, 00:05:46.306 "rw_mbytes_per_sec": 0, 00:05:46.306 "r_mbytes_per_sec": 0, 00:05:46.306 "w_mbytes_per_sec": 0 00:05:46.306 }, 00:05:46.306 "claimed": false, 00:05:46.306 "zoned": false, 00:05:46.306 "supported_io_types": { 00:05:46.306 "read": true, 00:05:46.306 "write": true, 00:05:46.306 "unmap": false, 00:05:46.306 "write_zeroes": true, 00:05:46.306 "flush": false, 00:05:46.306 "reset": true, 00:05:46.306 "compare": false, 00:05:46.306 "compare_and_write": false, 00:05:46.306 "abort": true, 00:05:46.306 "nvme_admin": false, 
00:05:46.306 "nvme_io": false 00:05:46.306 }, 00:05:46.306 "driver_specific": {} 00:05:46.306 } 00:05:46.306 ] 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # return 0 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:05:46.306 10:09:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:05:46.306 Running I/O for 60 seconds... 
00:05:52.904 10:09:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 706933.89 2827735.55 0.00 0.00 3033088.00 0.00 0.00 ' 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=706933.89 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 706933 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=706933 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=176000 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 176000 -gt 1000 ']' 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 176000 Malloc_0 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 176000 IOPS Malloc_0 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:52.904 10:09:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:05:52.904 ************************************ 00:05:52.904 START TEST bdev_qos_iops 00:05:52.904 ************************************ 00:05:52.904 10:09:57 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # run_qos_test 176000 IOPS Malloc_0 00:05:52.904 10:09:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=176000 00:05:52.904 10:09:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:05:52.904 10:09:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:05:52.904 10:09:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:05:52.904 10:09:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:05:52.904 10:09:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:05:52.904 10:09:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:05:52.904 10:09:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:05:52.904 10:09:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 175893.23 703572.90 0.00 0.00 727744.00 0.00 0.00 ' 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=175893.23 00:05:58.180 10:10:02 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 175893 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=175893 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=158400 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=193600 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 175893 -lt 158400 ']' 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 175893 -gt 193600 ']' 00:05:58.180 00:05:58.180 real 0m5.423s 00:05:58.180 user 0m0.162s 00:05:58.180 sys 0m0.002s 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:58.180 ************************************ 00:05:58.180 END TEST bdev_qos_iops 00:05:58.180 ************************************ 00:05:58.180 10:10:02 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:05:58.180 10:10:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:05:58.180 10:10:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:05:58.180 10:10:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:05:58.180 10:10:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:05:58.180 10:10:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:05:58.180 10:10:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:05:58.180 10:10:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 478653.17 1914612.67 0.00 0.00 2067456.00 0.00 0.00 ' 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=2067456.00 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 2067456 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=2067456 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=201 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 201 -lt 2 ']' 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 201 Null_1 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 201 BANDWIDTH Null_1 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:06:03.448 10:10:08 
blockdev_general.bdev_qos -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.448 10:10:08 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:03.448 ************************************ 00:06:03.448 START TEST bdev_qos_bw 00:06:03.448 ************************************ 00:06:03.448 10:10:08 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # run_qos_test 201 BANDWIDTH Null_1 00:06:03.448 10:10:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=201 00:06:03.448 10:10:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:06:03.448 10:10:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:06:03.449 10:10:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:06:03.449 10:10:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:06:03.449 10:10:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:03.449 10:10:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:03.449 10:10:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:06:03.449 10:10:08 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 51492.54 205970.16 0.00 0.00 221468.00 0.00 0.00 ' 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=221468.00 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 221468 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=221468 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=205824 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=185241 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=226406 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 221468 -lt 185241 ']' 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 221468 -gt 226406 ']' 00:06:08.715 00:06:08.715 real 0m5.455s 00:06:08.715 user 0m0.141s 00:06:08.715 sys 0m0.018s 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:06:08.715 ************************************ 00:06:08.715 END TEST bdev_qos_bw 00:06:08.715 ************************************ 00:06:08.715 10:10:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:06:08.715 
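Note on the two run_qos_test passes above: both use the same acceptance rule — convert the configured limit into the units reported by iostat.py (IOPS in column 2, KB/s in column 6), then require the measured value to fall within roughly ±10% of it. With the numbers from this run the check works out as follows; the variable names are only a sketch of the arithmetic, not the harness code itself.

  qos_iops_limit=176000
  lower=$((qos_iops_limit * 9 / 10))     # 158400
  upper=$((qos_iops_limit * 11 / 10))    # 193600
  measured_iops=175893                   # column 2 for Malloc_0 -> inside [158400, 193600], IOPS case passes

  qos_bw_limit_kb=$((201 * 1024))        # 205824 KB/s for the 201 MB/s limit on Null_1
  lower=$((qos_bw_limit_kb * 9 / 10))    # 185241
  upper=$((qos_bw_limit_kb * 11 / 10))   # 226406
  measured_kb=221468                     # column 6 for Null_1 -> inside [185241, 226406], bandwidth case passes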
10:10:13 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:08.715 10:10:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:08.715 10:10:13 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:08.715 10:10:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:06:08.715 10:10:13 blockdev_general.bdev_qos -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:06:08.715 10:10:13 blockdev_general.bdev_qos -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:08.715 10:10:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:08.715 ************************************ 00:06:08.715 START TEST bdev_qos_ro_bw 00:06:08.715 ************************************ 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:06:08.715 10:10:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.66 2050.66 0.00 0.00 2216.00 0.00 0.00 ' 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2216.00 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2216 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2216 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2216 -lt 1843 ']' 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2216 -gt 2252 ']' 
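Note on the read-only bandwidth case above: it layers a 2 MB/s read limit (--r_mbytes_per_sec) on Malloc_0, which already carries the 176000 IOPS limit from earlier in the suite, and validates it the same way — 2048 KB/s nominal, accepted between 1843 and 2252, 2216 KB/s measured. In the bdev_get_bdevs dumps earlier in this log a value of 0 in assigned_rate_limits means no limit is applied, so the limits left behind by the suite can be inspected and cleared roughly like this (a sketch using the rpc.py defaults, not part of the test itself):

  ./scripts/rpc.py bdev_get_bdevs -b Malloc_0 | jq '.[0].assigned_rate_limits'           # shows rw_ios_per_sec / r_mbytes_per_sec currently set
  ./scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 0 --r_mbytes_per_sec 0 Malloc_0   # 0 restores the unlimited default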
00:06:13.980 00:06:13.980 real 0m5.529s 00:06:13.980 user 0m0.145s 00:06:13.980 sys 0m0.018s 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.980 10:10:19 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:06:13.980 ************************************ 00:06:13.980 END TEST bdev_qos_ro_bw 00:06:13.980 ************************************ 00:06:13.980 10:10:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:06:13.980 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:13.980 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:14.546 00:06:14.546 Latency(us) 00:06:14.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:14.546 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:14.546 Malloc_0 : 28.01 240133.63 938.02 0.00 0.00 1056.42 306.22 503316.40 00:06:14.546 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:14.546 Null_1 : 28.03 348308.12 1360.58 0.00 0.00 734.68 52.66 24217.11 00:06:14.546 =================================================================================================================== 00:06:14.546 Total : 588441.75 2298.60 0.00 0.00 865.90 52.66 503316.40 00:06:14.546 0 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 48908 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@949 -- # '[' -z 48908 ']' 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # kill -0 48908 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # uname 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@957 -- # ps -c -o command 48908 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@957 -- # tail -1 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:06:14.546 killing process with pid 48908 00:06:14.546 Received shutdown signal, test time was about 28.051204 seconds 00:06:14.546 00:06:14.546 Latency(us) 00:06:14.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:14.546 =================================================================================================================== 00:06:14.546 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # echo 'killing process with pid 48908' 00:06:14.546 10:10:19 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # kill 48908 00:06:14.546 10:10:19 blockdev_general.bdev_qos 
-- common/autotest_common.sh@973 -- # wait 48908 00:06:14.546 10:10:20 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:06:14.546 00:06:14.546 real 0m29.580s 00:06:14.546 user 0m30.396s 00:06:14.546 sys 0m0.886s 00:06:14.546 ************************************ 00:06:14.546 END TEST bdev_qos 00:06:14.546 ************************************ 00:06:14.546 10:10:20 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.546 10:10:20 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:14.546 10:10:20 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:06:14.546 10:10:20 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:14.546 10:10:20 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.546 10:10:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:14.546 ************************************ 00:06:14.546 START TEST bdev_qd_sampling 00:06:14.546 ************************************ 00:06:14.546 10:10:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # qd_sampling_test_suite '' 00:06:14.546 10:10:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:06:14.546 10:10:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=49133 00:06:14.546 Process bdev QD sampling period testing pid: 49133 00:06:14.546 10:10:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 49133' 00:06:14.547 10:10:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:06:14.547 10:10:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 49133 00:06:14.547 10:10:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:06:14.547 10:10:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@830 -- # '[' -z 49133 ']' 00:06:14.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.547 10:10:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.547 10:10:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:14.547 10:10:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.547 10:10:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:14.547 10:10:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:14.547 [2024-06-10 10:10:20.142587] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:14.547 [2024-06-10 10:10:20.142899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:15.113 EAL: TSC is not safe to use in SMP mode 00:06:15.113 EAL: TSC is not invariant 00:06:15.113 [2024-06-10 10:10:20.651236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.370 [2024-06-10 10:10:20.750506] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:15.370 [2024-06-10 10:10:20.750568] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:15.370 [2024-06-10 10:10:20.753846] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.370 [2024-06-10 10:10:20.753836] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@863 -- # return 0 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:15.936 Malloc_QD 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_name=Malloc_QD 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local i 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:15.936 [ 00:06:15.936 { 00:06:15.936 "name": "Malloc_QD", 00:06:15.936 "aliases": [ 00:06:15.936 "a543d228-2711-11ef-b084-113036b5c18d" 00:06:15.936 ], 00:06:15.936 "product_name": "Malloc disk", 00:06:15.936 "block_size": 512, 00:06:15.936 "num_blocks": 262144, 00:06:15.936 "uuid": "a543d228-2711-11ef-b084-113036b5c18d", 00:06:15.936 "assigned_rate_limits": { 00:06:15.936 "rw_ios_per_sec": 0, 00:06:15.936 "rw_mbytes_per_sec": 0, 00:06:15.936 "r_mbytes_per_sec": 0, 00:06:15.936 "w_mbytes_per_sec": 0 00:06:15.936 }, 00:06:15.936 "claimed": false, 00:06:15.936 "zoned": false, 00:06:15.936 "supported_io_types": { 00:06:15.936 "read": true, 00:06:15.936 "write": true, 00:06:15.936 "unmap": true, 00:06:15.936 "write_zeroes": true, 00:06:15.936 "flush": true, 00:06:15.936 "reset": true, 00:06:15.936 "compare": false, 00:06:15.936 "compare_and_write": false, 00:06:15.936 "abort": true, 00:06:15.936 "nvme_admin": false, 00:06:15.936 "nvme_io": false 00:06:15.936 }, 00:06:15.936 "memory_domains": [ 00:06:15.936 { 00:06:15.936 "dma_device_id": "system", 00:06:15.936 
"dma_device_type": 1 00:06:15.936 }, 00:06:15.936 { 00:06:15.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:15.936 "dma_device_type": 2 00:06:15.936 } 00:06:15.936 ], 00:06:15.936 "driver_specific": {} 00:06:15.936 } 00:06:15.936 ] 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # return 0 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:06:15.936 10:10:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:15.937 Running I/O for 5 seconds... 00:06:17.836 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:06:17.836 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:06:17.836 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:06:17.836 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:06:17.836 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:06:17.837 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:17.837 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:17.837 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:17.837 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:06:17.837 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:17.837 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:17.837 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:17.837 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:06:17.837 "tick_rate": 2100000351, 00:06:17.837 "ticks": 634755761580, 00:06:17.837 "bdevs": [ 00:06:17.837 { 00:06:17.837 "name": "Malloc_QD", 00:06:17.837 "bytes_read": 15772717568, 00:06:17.837 "num_read_ops": 3850755, 00:06:17.837 "bytes_written": 0, 00:06:17.837 "num_write_ops": 0, 00:06:17.837 "bytes_unmapped": 0, 00:06:17.837 "num_unmap_ops": 0, 00:06:17.837 "bytes_copied": 0, 00:06:17.837 "num_copy_ops": 0, 00:06:17.837 "read_latency_ticks": 2181805645526, 00:06:17.837 "max_read_latency_ticks": 1200776, 00:06:17.837 "min_read_latency_ticks": 36718, 00:06:17.837 "write_latency_ticks": 0, 00:06:17.837 "max_write_latency_ticks": 0, 00:06:17.837 "min_write_latency_ticks": 0, 00:06:17.837 "unmap_latency_ticks": 0, 00:06:17.837 "max_unmap_latency_ticks": 0, 00:06:17.837 "min_unmap_latency_ticks": 0, 00:06:17.837 "copy_latency_ticks": 0, 00:06:17.837 "max_copy_latency_ticks": 0, 00:06:17.837 "min_copy_latency_ticks": 0, 00:06:17.837 "io_error": {}, 00:06:17.837 "queue_depth_polling_period": 10, 00:06:17.837 "queue_depth": 512, 00:06:17.837 "io_time": 340, 00:06:17.837 "weighted_io_time": 174080 00:06:17.837 } 00:06:17.837 ] 00:06:17.837 }' 00:06:17.837 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # 
qd_sampling_period=10 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:18.095 00:06:18.095 Latency(us) 00:06:18.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:18.095 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:18.095 Malloc_QD : 2.06 932483.19 3642.51 0.00 0.00 274.29 51.93 573.44 00:06:18.095 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:18.095 Malloc_QD : 2.06 962701.80 3760.55 0.00 0.00 265.67 50.47 507.12 00:06:18.095 =================================================================================================================== 00:06:18.095 Total : 1895184.99 7403.07 0.00 0.00 269.91 50.47 573.44 00:06:18.095 0 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 49133 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@949 -- # '[' -z 49133 ']' 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # kill -0 49133 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # uname 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@957 -- # tail -1 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@957 -- # ps -c -o command 49133 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:06:18.095 killing process with pid 49133 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # echo 'killing process with pid 49133' 00:06:18.095 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # kill 49133 00:06:18.095 Received shutdown signal, test time was about 2.088333 seconds 00:06:18.095 00:06:18.096 Latency(us) 00:06:18.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:18.096 =================================================================================================================== 00:06:18.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:18.096 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@973 -- # wait 49133 00:06:18.096 10:10:23 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:06:18.096 00:06:18.096 real 0m3.501s 00:06:18.096 user 0m6.370s 00:06:18.096 sys 0m0.660s 00:06:18.096 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:18.096 10:10:23 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:18.096 
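Note on the sampling suite above: it enables queue-depth tracking on Malloc_QD with bdev_set_qd_sampling_period, runs I/O for five seconds, and then reads the tracked fields back out of bdev_get_iostat; the pass condition is simply that the queue_depth_polling_period reported there matches the value 10 that was set. Repeating the check by hand looks roughly like this, assuming the same rpc.py defaults as the rest of this run:

  ./scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10                                      # enable sampling with period 10, as above
  ./scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period'   # prints 10 once sampling is active
  # queue_depth, io_time and weighted_io_time in the same output are the tracked
  # values that show up non-zero in the iostat dump above.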
************************************ 00:06:18.096 END TEST bdev_qd_sampling 00:06:18.096 ************************************ 00:06:18.096 10:10:23 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:06:18.096 10:10:23 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:18.096 10:10:23 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.096 10:10:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:18.096 ************************************ 00:06:18.096 START TEST bdev_error 00:06:18.096 ************************************ 00:06:18.096 10:10:23 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # error_test_suite '' 00:06:18.096 10:10:23 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:06:18.096 10:10:23 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:06:18.096 10:10:23 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:06:18.096 10:10:23 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=49176 00:06:18.096 10:10:23 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 49176' 00:06:18.096 Process error testing pid: 49176 00:06:18.096 10:10:23 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:06:18.096 10:10:23 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 49176 00:06:18.096 10:10:23 blockdev_general.bdev_error -- common/autotest_common.sh@830 -- # '[' -z 49176 ']' 00:06:18.096 10:10:23 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.096 10:10:23 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:18.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.096 10:10:23 blockdev_general.bdev_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.096 10:10:23 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:18.096 10:10:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:18.096 [2024-06-10 10:10:23.687985] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:18.096 [2024-06-10 10:10:23.688177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:18.662 EAL: TSC is not safe to use in SMP mode 00:06:18.662 EAL: TSC is not invariant 00:06:18.662 [2024-06-10 10:10:24.177297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.922 [2024-06-10 10:10:24.269749] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:06:18.922 [2024-06-10 10:10:24.272401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@863 -- # return 0 00:06:19.490 10:10:24 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:19.490 Dev_1 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.490 10:10:24 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_1 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:19.490 [ 00:06:19.490 { 00:06:19.490 "name": "Dev_1", 00:06:19.490 "aliases": [ 00:06:19.490 "a7610afc-2711-11ef-b084-113036b5c18d" 00:06:19.490 ], 00:06:19.490 "product_name": "Malloc disk", 00:06:19.490 "block_size": 512, 00:06:19.490 "num_blocks": 262144, 00:06:19.490 "uuid": "a7610afc-2711-11ef-b084-113036b5c18d", 00:06:19.490 "assigned_rate_limits": { 00:06:19.490 "rw_ios_per_sec": 0, 00:06:19.490 "rw_mbytes_per_sec": 0, 00:06:19.490 "r_mbytes_per_sec": 0, 00:06:19.490 "w_mbytes_per_sec": 0 00:06:19.490 }, 00:06:19.490 "claimed": false, 00:06:19.490 "zoned": false, 00:06:19.490 "supported_io_types": { 00:06:19.490 "read": true, 00:06:19.490 "write": true, 00:06:19.490 "unmap": true, 00:06:19.490 "write_zeroes": true, 00:06:19.490 "flush": true, 00:06:19.490 "reset": true, 00:06:19.490 "compare": false, 00:06:19.490 "compare_and_write": false, 00:06:19.490 "abort": true, 00:06:19.490 "nvme_admin": false, 00:06:19.490 "nvme_io": false 00:06:19.490 }, 00:06:19.490 "memory_domains": [ 00:06:19.490 { 00:06:19.490 "dma_device_id": "system", 00:06:19.490 "dma_device_type": 1 00:06:19.490 }, 00:06:19.490 { 00:06:19.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.490 "dma_device_type": 2 00:06:19.490 } 00:06:19.490 ], 00:06:19.490 "driver_specific": {} 00:06:19.490 } 00:06:19.490 ] 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.490 10:10:24 
blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:06:19.490 10:10:24 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:19.490 true 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.490 10:10:24 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:19.490 Dev_2 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.490 10:10:24 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_2 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:19.490 [ 00:06:19.490 { 00:06:19.490 "name": "Dev_2", 00:06:19.490 "aliases": [ 00:06:19.490 "a7672449-2711-11ef-b084-113036b5c18d" 00:06:19.490 ], 00:06:19.490 "product_name": "Malloc disk", 00:06:19.490 "block_size": 512, 00:06:19.490 "num_blocks": 262144, 00:06:19.490 "uuid": "a7672449-2711-11ef-b084-113036b5c18d", 00:06:19.490 "assigned_rate_limits": { 00:06:19.490 "rw_ios_per_sec": 0, 00:06:19.490 "rw_mbytes_per_sec": 0, 00:06:19.490 "r_mbytes_per_sec": 0, 00:06:19.490 "w_mbytes_per_sec": 0 00:06:19.490 }, 00:06:19.490 "claimed": false, 00:06:19.490 "zoned": false, 00:06:19.490 "supported_io_types": { 00:06:19.490 "read": true, 00:06:19.490 "write": true, 00:06:19.490 "unmap": true, 00:06:19.490 "write_zeroes": true, 00:06:19.490 "flush": true, 00:06:19.490 "reset": true, 00:06:19.490 "compare": false, 00:06:19.490 "compare_and_write": false, 00:06:19.490 "abort": true, 00:06:19.490 "nvme_admin": false, 00:06:19.490 "nvme_io": false 00:06:19.490 }, 00:06:19.490 "memory_domains": [ 00:06:19.490 { 00:06:19.490 "dma_device_id": "system", 00:06:19.490 "dma_device_type": 1 00:06:19.490 }, 00:06:19.490 { 00:06:19.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:19.490 "dma_device_type": 2 00:06:19.490 } 00:06:19.490 ], 
00:06:19.490 "driver_specific": {} 00:06:19.490 } 00:06:19.490 ] 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:06:19.490 10:10:24 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:19.490 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:19.491 10:10:24 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:19.491 10:10:24 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:06:19.491 10:10:24 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:19.491 Running I/O for 5 seconds... 00:06:20.425 10:10:25 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 49176 00:06:20.425 Process is existed as continue on error is set. Pid: 49176 00:06:20.425 10:10:25 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 49176' 00:06:20.425 10:10:25 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:06:20.426 10:10:25 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:20.426 10:10:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:20.426 10:10:25 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:20.426 10:10:25 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:06:20.426 10:10:25 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:20.426 10:10:25 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:20.426 10:10:25 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:20.426 10:10:25 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:06:20.682 Timeout while waiting for response: 00:06:20.682 00:06:20.682 00:06:24.864 00:06:24.864 Latency(us) 00:06:24.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:24.864 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:24.864 EE_Dev_1 : 0.96 396672.66 1549.50 5.23 0.00 40.13 18.41 116.05 00:06:24.864 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:24.864 Dev_2 : 5.00 726005.41 2835.96 0.00 0.00 21.79 5.58 18724.57 00:06:24.864 =================================================================================================================== 00:06:24.864 Total : 1122678.06 4385.46 5.23 0.00 23.53 5.58 18724.57 00:06:25.430 10:10:31 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 49176 00:06:25.430 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@949 -- # '[' -z 49176 ']' 00:06:25.430 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # kill -0 49176 00:06:25.430 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # uname 00:06:25.430 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@957 -- # ps -c -o command 49176 
00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@957 -- # tail -1 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:06:25.687 killing process with pid 49176 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 49176' 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # kill 49176 00:06:25.687 Received shutdown signal, test time was about 5.000000 seconds 00:06:25.687 00:06:25.687 Latency(us) 00:06:25.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.687 =================================================================================================================== 00:06:25.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@973 -- # wait 49176 00:06:25.687 10:10:31 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=49216 00:06:25.687 Process error testing pid: 49216 00:06:25.687 10:10:31 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 49216' 00:06:25.687 10:10:31 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:06:25.687 10:10:31 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 49216 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@830 -- # '[' -z 49216 ']' 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:25.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:25.687 10:10:31 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:25.687 [2024-06-10 10:10:31.228420] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:25.687 [2024-06-10 10:10:31.228634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:26.254 EAL: TSC is not safe to use in SMP mode 00:06:26.254 EAL: TSC is not invariant 00:06:26.254 [2024-06-10 10:10:31.688510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.254 [2024-06-10 10:10:31.767818] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
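Note on the second error run: this bdevperf instance (pid 49216) starts with the same queue and I/O-size parameters but without the -f flag that accompanied the first instance's "Process is existed as continue on error is set" banner, so this pass appears to exercise the abort-on-error side of the same scenario. The two invocations, copied from this log, differ only in that flag; the trailing '' is the empty argument the wrapper script passes through.

  # first run (pid 49176), continue on error:
  ./build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
  # second run (pid 49216), no -f:
  ./build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 ''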
00:06:26.254 [2024-06-10 10:10:31.769794] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@863 -- # return 0 00:06:26.822 10:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:26.822 Dev_1 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:26.822 10:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_1 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:26.822 [ 00:06:26.822 { 00:06:26.822 "name": "Dev_1", 00:06:26.822 "aliases": [ 00:06:26.822 "abdc4f3e-2711-11ef-b084-113036b5c18d" 00:06:26.822 ], 00:06:26.822 "product_name": "Malloc disk", 00:06:26.822 "block_size": 512, 00:06:26.822 "num_blocks": 262144, 00:06:26.822 "uuid": "abdc4f3e-2711-11ef-b084-113036b5c18d", 00:06:26.822 "assigned_rate_limits": { 00:06:26.822 "rw_ios_per_sec": 0, 00:06:26.822 "rw_mbytes_per_sec": 0, 00:06:26.822 "r_mbytes_per_sec": 0, 00:06:26.822 "w_mbytes_per_sec": 0 00:06:26.822 }, 00:06:26.822 "claimed": false, 00:06:26.822 "zoned": false, 00:06:26.822 "supported_io_types": { 00:06:26.822 "read": true, 00:06:26.822 "write": true, 00:06:26.822 "unmap": true, 00:06:26.822 "write_zeroes": true, 00:06:26.822 "flush": true, 00:06:26.822 "reset": true, 00:06:26.822 "compare": false, 00:06:26.822 "compare_and_write": false, 00:06:26.822 "abort": true, 00:06:26.822 "nvme_admin": false, 00:06:26.822 "nvme_io": false 00:06:26.822 }, 00:06:26.822 "memory_domains": [ 00:06:26.822 { 00:06:26.822 "dma_device_id": "system", 00:06:26.822 "dma_device_type": 1 00:06:26.822 }, 00:06:26.822 { 00:06:26.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.822 "dma_device_type": 2 00:06:26.822 } 00:06:26.822 ], 00:06:26.822 "driver_specific": {} 00:06:26.822 } 00:06:26.822 ] 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:26.822 10:10:32 
blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:06:26.822 10:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:26.822 true 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:26.822 10:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:26.822 Dev_2 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:26.822 10:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_name=Dev_2 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local i 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:26.822 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:26.822 [ 00:06:26.822 { 00:06:26.822 "name": "Dev_2", 00:06:26.822 "aliases": [ 00:06:26.822 "abe305c3-2711-11ef-b084-113036b5c18d" 00:06:26.822 ], 00:06:26.822 "product_name": "Malloc disk", 00:06:26.822 "block_size": 512, 00:06:26.823 "num_blocks": 262144, 00:06:26.823 "uuid": "abe305c3-2711-11ef-b084-113036b5c18d", 00:06:26.823 "assigned_rate_limits": { 00:06:26.823 "rw_ios_per_sec": 0, 00:06:26.823 "rw_mbytes_per_sec": 0, 00:06:26.823 "r_mbytes_per_sec": 0, 00:06:26.823 "w_mbytes_per_sec": 0 00:06:26.823 }, 00:06:26.823 "claimed": false, 00:06:26.823 "zoned": false, 00:06:26.823 "supported_io_types": { 00:06:26.823 "read": true, 00:06:26.823 "write": true, 00:06:26.823 "unmap": true, 00:06:26.823 "write_zeroes": true, 00:06:26.823 "flush": true, 00:06:26.823 "reset": true, 00:06:26.823 "compare": false, 00:06:26.823 "compare_and_write": false, 00:06:26.823 "abort": true, 00:06:26.823 "nvme_admin": false, 00:06:26.823 "nvme_io": false 00:06:26.823 }, 00:06:26.823 "memory_domains": [ 00:06:26.823 { 00:06:26.823 "dma_device_id": "system", 00:06:26.823 "dma_device_type": 1 00:06:26.823 }, 00:06:26.823 { 00:06:26.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.823 "dma_device_type": 2 00:06:26.823 } 00:06:26.823 ], 
00:06:26.823 "driver_specific": {} 00:06:26.823 } 00:06:26.823 ] 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # return 0 00:06:26.823 10:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:26.823 10:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 49216 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@649 -- # local es=0 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # valid_exec_arg wait 49216 00:06:26.823 10:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@637 -- # local arg=wait 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@641 -- # type -t wait 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:26.823 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # wait 49216 00:06:27.081 Running I/O for 5 seconds... 00:06:27.081 task offset: 71360 on job bdev=EE_Dev_1 fails 00:06:27.081 00:06:27.081 Latency(us) 00:06:27.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:27.081 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:27.081 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:06:27.081 EE_Dev_1 : 0.00 222222.22 868.06 50505.05 0.00 48.64 17.43 91.18 00:06:27.081 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:27.081 Dev_2 : 0.00 266666.67 1041.67 0.00 0.00 29.10 21.70 39.98 00:06:27.081 =================================================================================================================== 00:06:27.081 Total : 488888.89 1909.72 50505.05 0.00 38.04 17.43 91.18 00:06:27.081 [2024-06-10 10:10:32.470074] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.081 request: 00:06:27.081 { 00:06:27.081 "method": "perform_tests", 00:06:27.081 "req_id": 1 00:06:27.081 } 00:06:27.081 Got JSON-RPC error response 00:06:27.081 response: 00:06:27.081 { 00:06:27.081 "code": -32603, 00:06:27.081 "message": "bdevperf failed with error Operation not permitted" 00:06:27.081 } 00:06:27.081 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # es=255 00:06:27.081 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:27.081 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # es=127 00:06:27.081 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # case "$es" in 00:06:27.081 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@669 -- # es=1 00:06:27.081 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:27.081 
00:06:27.081 real 0m8.994s 00:06:27.081 user 0m9.212s 00:06:27.081 sys 0m1.175s 00:06:27.081 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:27.081 10:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:27.081 ************************************ 00:06:27.081 END TEST bdev_error 00:06:27.081 ************************************ 00:06:27.339 10:10:32 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:06:27.339 10:10:32 blockdev_general -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:27.339 10:10:32 blockdev_general -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:27.339 10:10:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:27.339 ************************************ 00:06:27.339 START TEST bdev_stat 00:06:27.339 ************************************ 00:06:27.339 10:10:32 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # stat_test_suite '' 00:06:27.339 10:10:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:06:27.339 10:10:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=49247 00:06:27.340 10:10:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 49247' 00:06:27.340 Process Bdev IO statistics testing pid: 49247 00:06:27.340 10:10:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:06:27.340 10:10:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 49247 00:06:27.340 10:10:32 blockdev_general.bdev_stat -- common/autotest_common.sh@830 -- # '[' -z 49247 ']' 00:06:27.340 10:10:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:06:27.340 10:10:32 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.340 10:10:32 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:27.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.340 10:10:32 blockdev_general.bdev_stat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.340 10:10:32 blockdev_general.bdev_stat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:27.340 10:10:32 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:27.340 [2024-06-10 10:10:32.733830] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:27.340 [2024-06-10 10:10:32.734110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:27.907 EAL: TSC is not safe to use in SMP mode 00:06:27.907 EAL: TSC is not invariant 00:06:27.907 [2024-06-10 10:10:33.490412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.166 [2024-06-10 10:10:33.582530] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:28.166 [2024-06-10 10:10:33.582598] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:06:28.166 [2024-06-10 10:10:33.585970] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.166 [2024-06-10 10:10:33.585959] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@863 -- # return 0 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:28.166 Malloc_STAT 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_name=Malloc_STAT 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local i 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # rpc_cmd bdev_wait_for_examine 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:28.166 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:28.425 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:28.425 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:06:28.425 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:28.425 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:28.425 [ 00:06:28.425 { 00:06:28.425 "name": "Malloc_STAT", 00:06:28.425 "aliases": [ 00:06:28.425 "acb900b5-2711-11ef-b084-113036b5c18d" 00:06:28.425 ], 00:06:28.425 "product_name": "Malloc disk", 00:06:28.425 "block_size": 512, 00:06:28.425 "num_blocks": 262144, 00:06:28.425 "uuid": "acb900b5-2711-11ef-b084-113036b5c18d", 00:06:28.425 "assigned_rate_limits": { 00:06:28.425 "rw_ios_per_sec": 0, 00:06:28.425 "rw_mbytes_per_sec": 0, 00:06:28.425 "r_mbytes_per_sec": 0, 00:06:28.425 "w_mbytes_per_sec": 0 00:06:28.425 }, 00:06:28.425 "claimed": false, 00:06:28.425 "zoned": false, 00:06:28.425 "supported_io_types": { 00:06:28.425 "read": true, 00:06:28.425 "write": true, 00:06:28.425 "unmap": true, 00:06:28.425 "write_zeroes": true, 00:06:28.425 "flush": true, 00:06:28.425 "reset": true, 00:06:28.425 "compare": false, 00:06:28.425 "compare_and_write": false, 00:06:28.425 "abort": true, 00:06:28.425 "nvme_admin": false, 00:06:28.425 "nvme_io": false 00:06:28.425 }, 00:06:28.425 "memory_domains": [ 00:06:28.425 { 00:06:28.425 "dma_device_id": "system", 00:06:28.425 "dma_device_type": 1 00:06:28.425 }, 00:06:28.425 { 00:06:28.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.425 "dma_device_type": 2 00:06:28.425 } 00:06:28.425 ], 00:06:28.425 "driver_specific": {} 00:06:28.425 } 00:06:28.425 ] 00:06:28.425 
10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:28.425 10:10:33 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # return 0 00:06:28.425 10:10:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:06:28.425 10:10:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:28.425 Running I/O for 10 seconds... 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:06:30.325 "tick_rate": 2100000351, 00:06:30.325 "ticks": 660770422492, 00:06:30.325 "bdevs": [ 00:06:30.325 { 00:06:30.325 "name": "Malloc_STAT", 00:06:30.325 "bytes_read": 14834242048, 00:06:30.325 "num_read_ops": 3621635, 00:06:30.325 "bytes_written": 0, 00:06:30.325 "num_write_ops": 0, 00:06:30.325 "bytes_unmapped": 0, 00:06:30.325 "num_unmap_ops": 0, 00:06:30.325 "bytes_copied": 0, 00:06:30.325 "num_copy_ops": 0, 00:06:30.325 "read_latency_ticks": 2039797230130, 00:06:30.325 "max_read_latency_ticks": 940024, 00:06:30.325 "min_read_latency_ticks": 32790, 00:06:30.325 "write_latency_ticks": 0, 00:06:30.325 "max_write_latency_ticks": 0, 00:06:30.325 "min_write_latency_ticks": 0, 00:06:30.325 "unmap_latency_ticks": 0, 00:06:30.325 "max_unmap_latency_ticks": 0, 00:06:30.325 "min_unmap_latency_ticks": 0, 00:06:30.325 "copy_latency_ticks": 0, 00:06:30.325 "max_copy_latency_ticks": 0, 00:06:30.325 "min_copy_latency_ticks": 0, 00:06:30.325 "io_error": {} 00:06:30.325 } 00:06:30.325 ] 00:06:30.325 }' 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=3621635 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:30.325 10:10:35 
blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:06:30.325 "tick_rate": 2100000351, 00:06:30.325 "ticks": 660830253106, 00:06:30.325 "name": "Malloc_STAT", 00:06:30.325 "channels": [ 00:06:30.325 { 00:06:30.325 "thread_id": 2, 00:06:30.325 "bytes_read": 7504658432, 00:06:30.325 "num_read_ops": 1832192, 00:06:30.325 "bytes_written": 0, 00:06:30.325 "num_write_ops": 0, 00:06:30.325 "bytes_unmapped": 0, 00:06:30.325 "num_unmap_ops": 0, 00:06:30.325 "bytes_copied": 0, 00:06:30.325 "num_copy_ops": 0, 00:06:30.325 "read_latency_ticks": 1035199574756, 00:06:30.325 "max_read_latency_ticks": 940024, 00:06:30.325 "min_read_latency_ticks": 524874, 00:06:30.325 "write_latency_ticks": 0, 00:06:30.325 "max_write_latency_ticks": 0, 00:06:30.325 "min_write_latency_ticks": 0, 00:06:30.325 "unmap_latency_ticks": 0, 00:06:30.325 "max_unmap_latency_ticks": 0, 00:06:30.325 "min_unmap_latency_ticks": 0, 00:06:30.325 "copy_latency_ticks": 0, 00:06:30.325 "max_copy_latency_ticks": 0, 00:06:30.325 "min_copy_latency_ticks": 0 00:06:30.325 }, 00:06:30.325 { 00:06:30.325 "thread_id": 3, 00:06:30.325 "bytes_read": 7549747200, 00:06:30.325 "num_read_ops": 1843200, 00:06:30.325 "bytes_written": 0, 00:06:30.325 "num_write_ops": 0, 00:06:30.325 "bytes_unmapped": 0, 00:06:30.325 "num_unmap_ops": 0, 00:06:30.325 "bytes_copied": 0, 00:06:30.325 "num_copy_ops": 0, 00:06:30.325 "read_latency_ticks": 1035343620750, 00:06:30.325 "max_read_latency_ticks": 791750, 00:06:30.325 "min_read_latency_ticks": 525750, 00:06:30.325 "write_latency_ticks": 0, 00:06:30.325 "max_write_latency_ticks": 0, 00:06:30.325 "min_write_latency_ticks": 0, 00:06:30.325 "unmap_latency_ticks": 0, 00:06:30.325 "max_unmap_latency_ticks": 0, 00:06:30.325 "min_unmap_latency_ticks": 0, 00:06:30.325 "copy_latency_ticks": 0, 00:06:30.325 "max_copy_latency_ticks": 0, 00:06:30.325 "min_copy_latency_ticks": 0 00:06:30.325 } 00:06:30.325 ] 00:06:30.325 }' 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=1832192 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=1832192 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=1843200 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=3675392 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:30.325 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:06:30.326 "tick_rate": 2100000351, 00:06:30.326 "ticks": 660909079152, 00:06:30.326 "bdevs": [ 00:06:30.326 { 00:06:30.326 "name": "Malloc_STAT", 00:06:30.326 "bytes_read": 15346995712, 00:06:30.326 "num_read_ops": 3746819, 00:06:30.326 "bytes_written": 0, 00:06:30.326 "num_write_ops": 0, 00:06:30.326 "bytes_unmapped": 0, 00:06:30.326 "num_unmap_ops": 0, 00:06:30.326 "bytes_copied": 0, 00:06:30.326 "num_copy_ops": 0, 00:06:30.326 
"read_latency_ticks": 2110822504016, 00:06:30.326 "max_read_latency_ticks": 940024, 00:06:30.326 "min_read_latency_ticks": 32790, 00:06:30.326 "write_latency_ticks": 0, 00:06:30.326 "max_write_latency_ticks": 0, 00:06:30.326 "min_write_latency_ticks": 0, 00:06:30.326 "unmap_latency_ticks": 0, 00:06:30.326 "max_unmap_latency_ticks": 0, 00:06:30.326 "min_unmap_latency_ticks": 0, 00:06:30.326 "copy_latency_ticks": 0, 00:06:30.326 "max_copy_latency_ticks": 0, 00:06:30.326 "min_copy_latency_ticks": 0, 00:06:30.326 "io_error": {} 00:06:30.326 } 00:06:30.326 ] 00:06:30.326 }' 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=3746819 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3675392 -lt 3621635 ']' 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3675392 -gt 3746819 ']' 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:30.326 00:06:30.326 Latency(us) 00:06:30.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:30.326 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:30.326 Malloc_STAT : 1.99 950733.08 3713.80 0.00 0.00 269.03 46.57 448.61 00:06:30.326 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:30.326 Malloc_STAT : 1.99 956345.34 3735.72 0.00 0.00 267.44 42.91 378.39 00:06:30.326 =================================================================================================================== 00:06:30.326 Total : 1907078.42 7449.53 0.00 0.00 268.24 42.91 448.61 00:06:30.326 0 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 49247 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@949 -- # '[' -z 49247 ']' 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # kill -0 49247 00:06:30.326 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # uname 00:06:30.584 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:06:30.584 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@957 -- # ps -c -o command 49247 00:06:30.584 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@957 -- # tail -1 00:06:30.584 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:06:30.584 killing process with pid 49247 00:06:30.584 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:06:30.584 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 49247' 00:06:30.584 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # kill 49247 00:06:30.584 Received shutdown signal, test time was about 2.027067 seconds 00:06:30.584 00:06:30.584 Latency(us) 00:06:30.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:30.584 
=================================================================================================================== 00:06:30.584 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:30.584 10:10:35 blockdev_general.bdev_stat -- common/autotest_common.sh@973 -- # wait 49247 00:06:30.584 10:10:36 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:06:30.584 00:06:30.584 real 0m3.380s 00:06:30.584 user 0m5.545s 00:06:30.584 sys 0m0.943s 00:06:30.584 10:10:36 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.584 ************************************ 00:06:30.584 END TEST bdev_stat 00:06:30.584 ************************************ 00:06:30.584 10:10:36 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:30.584 10:10:36 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:06:30.584 10:10:36 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:06:30.584 10:10:36 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:06:30.584 10:10:36 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:06:30.584 10:10:36 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:30.584 10:10:36 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:30.584 10:10:36 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:06:30.584 10:10:36 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:06:30.584 10:10:36 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:06:30.584 10:10:36 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:06:30.584 00:06:30.584 real 1m31.899s 00:06:30.584 user 4m29.465s 00:06:30.584 sys 0m25.368s 00:06:30.584 10:10:36 blockdev_general -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:30.584 10:10:36 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:30.584 ************************************ 00:06:30.584 END TEST blockdev_general 00:06:30.584 ************************************ 00:06:30.844 10:10:36 -- spdk/autotest.sh@190 -- # run_test bdev_raid /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:30.844 10:10:36 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:30.844 10:10:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.844 10:10:36 -- common/autotest_common.sh@10 -- # set +x 00:06:30.844 ************************************ 00:06:30.844 START TEST bdev_raid 00:06:30.844 ************************************ 00:06:30.844 10:10:36 bdev_raid -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:30.844 * Looking for test storage... 
00:06:30.844 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:06:30.844 10:10:36 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:30.844 10:10:36 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.844 10:10:36 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:06:30.844 10:10:36 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:06:30.844 10:10:36 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:06:30.844 10:10:36 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:06:30.844 10:10:36 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:06:30.844 10:10:36 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' FreeBSD = Linux ']' 00:06:30.844 10:10:36 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:06:30.844 10:10:36 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:30.844 10:10:36 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:30.844 10:10:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.844 ************************************ 00:06:30.844 START TEST raid0_resize_test 00:06:30.844 ************************************ 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # raid0_resize_test 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:06:30.844 Process raid pid: 49348 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=49348 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 49348' 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 49348 /var/tmp/spdk-raid.sock 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@830 -- # '[' -z 49348 ']' 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:30.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:30.844 10:10:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.844 [2024-06-10 10:10:36.433318] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:06:30.844 [2024-06-10 10:10:36.433485] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:31.410 EAL: TSC is not safe to use in SMP mode 00:06:31.410 EAL: TSC is not invariant 00:06:31.410 [2024-06-10 10:10:36.917661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.410 [2024-06-10 10:10:36.994604] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:31.410 [2024-06-10 10:10:36.996667] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.410 [2024-06-10 10:10:36.997350] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.410 [2024-06-10 10:10:36.997362] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.976 10:10:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:31.976 10:10:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@863 -- # return 0 00:06:31.976 10:10:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:32.235 Base_1 00:06:32.235 10:10:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:32.493 Base_2 00:06:32.493 10:10:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:06:32.750 [2024-06-10 10:10:38.263675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:32.750 [2024-06-10 10:10:38.264129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:32.750 [2024-06-10 10:10:38.264155] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c4b4a00 00:06:32.750 [2024-06-10 10:10:38.264159] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:32.750 [2024-06-10 10:10:38.264192] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c517e20 00:06:32.750 [2024-06-10 10:10:38.264245] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c4b4a00 00:06:32.750 [2024-06-10 10:10:38.264249] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x82c4b4a00 00:06:32.750 [2024-06-10 10:10:38.264280] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.750 10:10:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:06:33.008 [2024-06-10 10:10:38.563657] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:33.008 [2024-06-10 10:10:38.563679] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:33.008 true 00:06:33.008 10:10:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:06:33.008 10:10:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:33.572 [2024-06-10 10:10:38.887721] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:33.572 10:10:38 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:06:33.572 10:10:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:06:33.572 10:10:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:06:33.572 10:10:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:06:33.830 [2024-06-10 10:10:39.231692] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:33.830 [2024-06-10 10:10:39.231726] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:33.830 [2024-06-10 10:10:39.231770] bdev_raid.c:2290:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:33.830 true 00:06:33.830 10:10:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:33.830 10:10:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:06:34.089 [2024-06-10 10:10:39.555707] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 49348 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@949 -- # '[' -z 49348 ']' 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # kill -0 49348 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # uname 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # ps -c -o command 49348 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # tail -1 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:06:34.089 killing process with pid 49348 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 49348' 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # kill 49348 00:06:34.089 [2024-06-10 10:10:39.590046] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:34.089 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # wait 49348 00:06:34.089 [2024-06-10 10:10:39.590085] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:34.089 [2024-06-10 10:10:39.590098] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:34.089 [2024-06-10 10:10:39.590103] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c4b4a00 name Raid, state offline 00:06:34.089 [2024-06-10 10:10:39.590238] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:34.394 10:10:39 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:06:34.394 00:06:34.394 real 0m3.335s 00:06:34.394 user 0m5.184s 00:06:34.394 sys 0m0.743s 00:06:34.394 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.394 10:10:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.394 ************************************ 00:06:34.394 END TEST raid0_resize_test 00:06:34.394 ************************************ 00:06:34.394 10:10:39 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:06:34.394 10:10:39 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:06:34.394 10:10:39 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:34.394 10:10:39 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:06:34.394 10:10:39 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.394 10:10:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:34.394 ************************************ 00:06:34.394 START TEST raid_state_function_test 00:06:34.394 ************************************ 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 2 false 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:06:34.394 10:10:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=49398 00:06:34.394 Process raid pid: 49398 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49398' 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 49398 /var/tmp/spdk-raid.sock 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 49398 ']' 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:34.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:34.394 10:10:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.394 [2024-06-10 10:10:39.815298] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:34.394 [2024-06-10 10:10:39.815501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:34.962 EAL: TSC is not safe to use in SMP mode 00:06:34.962 EAL: TSC is not invariant 00:06:34.962 [2024-06-10 10:10:40.331601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.962 [2024-06-10 10:10:40.407815] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:34.962 [2024-06-10 10:10:40.409814] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.962 [2024-06-10 10:10:40.410475] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:34.962 [2024-06-10 10:10:40.410486] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:35.221 10:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:35.221 10:10:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:06:35.221 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:35.480 [2024-06-10 10:10:40.920653] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:35.480 [2024-06-10 10:10:40.920698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:35.480 [2024-06-10 10:10:40.920702] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:35.480 [2024-06-10 10:10:40.920708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:35.480 10:10:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:35.739 10:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:35.739 "name": "Existed_Raid", 00:06:35.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.739 "strip_size_kb": 64, 00:06:35.739 "state": "configuring", 00:06:35.739 "raid_level": "raid0", 00:06:35.739 "superblock": false, 00:06:35.739 "num_base_bdevs": 2, 00:06:35.739 "num_base_bdevs_discovered": 0, 00:06:35.739 "num_base_bdevs_operational": 2, 00:06:35.739 "base_bdevs_list": [ 00:06:35.739 { 00:06:35.739 "name": "BaseBdev1", 00:06:35.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.739 "is_configured": false, 00:06:35.739 "data_offset": 0, 00:06:35.739 "data_size": 0 00:06:35.739 }, 00:06:35.739 { 00:06:35.739 "name": 
"BaseBdev2", 00:06:35.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.739 "is_configured": false, 00:06:35.739 "data_offset": 0, 00:06:35.739 "data_size": 0 00:06:35.739 } 00:06:35.739 ] 00:06:35.739 }' 00:06:35.739 10:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:35.739 10:10:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.997 10:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:36.255 [2024-06-10 10:10:41.680660] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:36.255 [2024-06-10 10:10:41.680682] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b6ac500 name Existed_Raid, state configuring 00:06:36.255 10:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:36.513 [2024-06-10 10:10:41.976683] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:36.513 [2024-06-10 10:10:41.976724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:36.513 [2024-06-10 10:10:41.976728] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:36.513 [2024-06-10 10:10:41.976751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:36.513 10:10:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:36.770 [2024-06-10 10:10:42.181593] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:36.770 BaseBdev1 00:06:36.770 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:06:36.770 10:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:06:36.770 10:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:06:36.770 10:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:06:36.770 10:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:06:36.770 10:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:06:36.770 10:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:37.026 10:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:37.283 [ 00:06:37.283 { 00:06:37.283 "name": "BaseBdev1", 00:06:37.283 "aliases": [ 00:06:37.283 "b1bf4f84-2711-11ef-b084-113036b5c18d" 00:06:37.283 ], 00:06:37.283 "product_name": "Malloc disk", 00:06:37.283 "block_size": 512, 00:06:37.283 "num_blocks": 65536, 00:06:37.283 "uuid": "b1bf4f84-2711-11ef-b084-113036b5c18d", 00:06:37.283 "assigned_rate_limits": { 00:06:37.283 "rw_ios_per_sec": 0, 00:06:37.283 "rw_mbytes_per_sec": 0, 00:06:37.283 "r_mbytes_per_sec": 0, 00:06:37.283 
"w_mbytes_per_sec": 0 00:06:37.283 }, 00:06:37.283 "claimed": true, 00:06:37.283 "claim_type": "exclusive_write", 00:06:37.283 "zoned": false, 00:06:37.283 "supported_io_types": { 00:06:37.283 "read": true, 00:06:37.283 "write": true, 00:06:37.283 "unmap": true, 00:06:37.283 "write_zeroes": true, 00:06:37.283 "flush": true, 00:06:37.283 "reset": true, 00:06:37.283 "compare": false, 00:06:37.283 "compare_and_write": false, 00:06:37.283 "abort": true, 00:06:37.283 "nvme_admin": false, 00:06:37.283 "nvme_io": false 00:06:37.283 }, 00:06:37.283 "memory_domains": [ 00:06:37.283 { 00:06:37.283 "dma_device_id": "system", 00:06:37.283 "dma_device_type": 1 00:06:37.283 }, 00:06:37.283 { 00:06:37.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.283 "dma_device_type": 2 00:06:37.283 } 00:06:37.283 ], 00:06:37.283 "driver_specific": {} 00:06:37.283 } 00:06:37.283 ] 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:37.283 10:10:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.847 10:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:37.847 "name": "Existed_Raid", 00:06:37.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.847 "strip_size_kb": 64, 00:06:37.847 "state": "configuring", 00:06:37.847 "raid_level": "raid0", 00:06:37.847 "superblock": false, 00:06:37.847 "num_base_bdevs": 2, 00:06:37.847 "num_base_bdevs_discovered": 1, 00:06:37.847 "num_base_bdevs_operational": 2, 00:06:37.847 "base_bdevs_list": [ 00:06:37.847 { 00:06:37.847 "name": "BaseBdev1", 00:06:37.847 "uuid": "b1bf4f84-2711-11ef-b084-113036b5c18d", 00:06:37.847 "is_configured": true, 00:06:37.847 "data_offset": 0, 00:06:37.847 "data_size": 65536 00:06:37.847 }, 00:06:37.847 { 00:06:37.847 "name": "BaseBdev2", 00:06:37.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.847 "is_configured": false, 00:06:37.847 "data_offset": 0, 00:06:37.847 "data_size": 0 00:06:37.847 } 00:06:37.847 ] 00:06:37.847 }' 00:06:37.847 10:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:37.847 
10:10:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.104 10:10:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:38.670 [2024-06-10 10:10:44.048773] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:38.670 [2024-06-10 10:10:44.048817] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b6ac500 name Existed_Raid, state configuring 00:06:38.670 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:38.943 [2024-06-10 10:10:44.500800] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:38.943 [2024-06-10 10:10:44.501553] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:38.943 [2024-06-10 10:10:44.501601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:38.943 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.507 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:39.507 "name": "Existed_Raid", 00:06:39.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:39.507 "strip_size_kb": 64, 00:06:39.507 "state": "configuring", 00:06:39.507 "raid_level": "raid0", 00:06:39.507 "superblock": false, 00:06:39.507 "num_base_bdevs": 2, 00:06:39.507 "num_base_bdevs_discovered": 1, 00:06:39.507 "num_base_bdevs_operational": 2, 00:06:39.507 "base_bdevs_list": [ 00:06:39.507 { 00:06:39.507 "name": "BaseBdev1", 00:06:39.507 "uuid": "b1bf4f84-2711-11ef-b084-113036b5c18d", 00:06:39.507 "is_configured": true, 00:06:39.507 "data_offset": 0, 00:06:39.507 
"data_size": 65536 00:06:39.507 }, 00:06:39.507 { 00:06:39.507 "name": "BaseBdev2", 00:06:39.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:39.507 "is_configured": false, 00:06:39.507 "data_offset": 0, 00:06:39.507 "data_size": 0 00:06:39.507 } 00:06:39.507 ] 00:06:39.507 }' 00:06:39.507 10:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:39.507 10:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.763 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:40.018 [2024-06-10 10:10:45.480933] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:40.018 [2024-06-10 10:10:45.480958] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b6aca00 00:06:40.018 [2024-06-10 10:10:45.480962] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:40.018 [2024-06-10 10:10:45.480979] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b70fec0 00:06:40.018 [2024-06-10 10:10:45.481054] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b6aca00 00:06:40.018 [2024-06-10 10:10:45.481058] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b6aca00 00:06:40.018 [2024-06-10 10:10:45.481084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.018 BaseBdev2 00:06:40.018 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:06:40.018 10:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:06:40.018 10:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:06:40.019 10:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:06:40.019 10:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:06:40.019 10:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:06:40.019 10:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:40.275 10:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:40.533 [ 00:06:40.533 { 00:06:40.533 "name": "BaseBdev2", 00:06:40.533 "aliases": [ 00:06:40.533 "b3b6deba-2711-11ef-b084-113036b5c18d" 00:06:40.533 ], 00:06:40.533 "product_name": "Malloc disk", 00:06:40.533 "block_size": 512, 00:06:40.533 "num_blocks": 65536, 00:06:40.533 "uuid": "b3b6deba-2711-11ef-b084-113036b5c18d", 00:06:40.533 "assigned_rate_limits": { 00:06:40.533 "rw_ios_per_sec": 0, 00:06:40.533 "rw_mbytes_per_sec": 0, 00:06:40.533 "r_mbytes_per_sec": 0, 00:06:40.533 "w_mbytes_per_sec": 0 00:06:40.533 }, 00:06:40.534 "claimed": true, 00:06:40.534 "claim_type": "exclusive_write", 00:06:40.534 "zoned": false, 00:06:40.534 "supported_io_types": { 00:06:40.534 "read": true, 00:06:40.534 "write": true, 00:06:40.534 "unmap": true, 00:06:40.534 "write_zeroes": true, 00:06:40.534 "flush": true, 00:06:40.534 "reset": true, 00:06:40.534 "compare": false, 
00:06:40.534 "compare_and_write": false, 00:06:40.534 "abort": true, 00:06:40.534 "nvme_admin": false, 00:06:40.534 "nvme_io": false 00:06:40.534 }, 00:06:40.534 "memory_domains": [ 00:06:40.534 { 00:06:40.534 "dma_device_id": "system", 00:06:40.534 "dma_device_type": 1 00:06:40.534 }, 00:06:40.534 { 00:06:40.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.534 "dma_device_type": 2 00:06:40.534 } 00:06:40.534 ], 00:06:40.534 "driver_specific": {} 00:06:40.534 } 00:06:40.534 ] 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:40.534 10:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:40.792 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:40.792 "name": "Existed_Raid", 00:06:40.792 "uuid": "b3b6e43f-2711-11ef-b084-113036b5c18d", 00:06:40.792 "strip_size_kb": 64, 00:06:40.792 "state": "online", 00:06:40.792 "raid_level": "raid0", 00:06:40.792 "superblock": false, 00:06:40.792 "num_base_bdevs": 2, 00:06:40.792 "num_base_bdevs_discovered": 2, 00:06:40.792 "num_base_bdevs_operational": 2, 00:06:40.792 "base_bdevs_list": [ 00:06:40.792 { 00:06:40.792 "name": "BaseBdev1", 00:06:40.792 "uuid": "b1bf4f84-2711-11ef-b084-113036b5c18d", 00:06:40.792 "is_configured": true, 00:06:40.792 "data_offset": 0, 00:06:40.792 "data_size": 65536 00:06:40.792 }, 00:06:40.792 { 00:06:40.792 "name": "BaseBdev2", 00:06:40.792 "uuid": "b3b6deba-2711-11ef-b084-113036b5c18d", 00:06:40.792 "is_configured": true, 00:06:40.792 "data_offset": 0, 00:06:40.792 "data_size": 65536 00:06:40.792 } 00:06:40.792 ] 00:06:40.792 }' 00:06:40.792 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:40.792 10:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.051 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:06:41.051 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:06:41.051 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:06:41.051 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:06:41.051 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:06:41.051 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:06:41.051 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:06:41.051 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:06:41.051 [2024-06-10 10:10:46.612848] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.051 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:06:41.051 "name": "Existed_Raid", 00:06:41.051 "aliases": [ 00:06:41.051 "b3b6e43f-2711-11ef-b084-113036b5c18d" 00:06:41.051 ], 00:06:41.051 "product_name": "Raid Volume", 00:06:41.051 "block_size": 512, 00:06:41.051 "num_blocks": 131072, 00:06:41.051 "uuid": "b3b6e43f-2711-11ef-b084-113036b5c18d", 00:06:41.051 "assigned_rate_limits": { 00:06:41.051 "rw_ios_per_sec": 0, 00:06:41.051 "rw_mbytes_per_sec": 0, 00:06:41.051 "r_mbytes_per_sec": 0, 00:06:41.051 "w_mbytes_per_sec": 0 00:06:41.051 }, 00:06:41.051 "claimed": false, 00:06:41.051 "zoned": false, 00:06:41.051 "supported_io_types": { 00:06:41.051 "read": true, 00:06:41.051 "write": true, 00:06:41.051 "unmap": true, 00:06:41.051 "write_zeroes": true, 00:06:41.051 "flush": true, 00:06:41.051 "reset": true, 00:06:41.051 "compare": false, 00:06:41.051 "compare_and_write": false, 00:06:41.051 "abort": false, 00:06:41.051 "nvme_admin": false, 00:06:41.051 "nvme_io": false 00:06:41.052 }, 00:06:41.052 "memory_domains": [ 00:06:41.052 { 00:06:41.052 "dma_device_id": "system", 00:06:41.052 "dma_device_type": 1 00:06:41.052 }, 00:06:41.052 { 00:06:41.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.052 "dma_device_type": 2 00:06:41.052 }, 00:06:41.052 { 00:06:41.052 "dma_device_id": "system", 00:06:41.052 "dma_device_type": 1 00:06:41.052 }, 00:06:41.052 { 00:06:41.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.052 "dma_device_type": 2 00:06:41.052 } 00:06:41.052 ], 00:06:41.052 "driver_specific": { 00:06:41.052 "raid": { 00:06:41.052 "uuid": "b3b6e43f-2711-11ef-b084-113036b5c18d", 00:06:41.052 "strip_size_kb": 64, 00:06:41.052 "state": "online", 00:06:41.052 "raid_level": "raid0", 00:06:41.052 "superblock": false, 00:06:41.052 "num_base_bdevs": 2, 00:06:41.052 "num_base_bdevs_discovered": 2, 00:06:41.052 "num_base_bdevs_operational": 2, 00:06:41.052 "base_bdevs_list": [ 00:06:41.052 { 00:06:41.052 "name": "BaseBdev1", 00:06:41.052 "uuid": "b1bf4f84-2711-11ef-b084-113036b5c18d", 00:06:41.052 "is_configured": true, 00:06:41.052 "data_offset": 0, 00:06:41.052 "data_size": 65536 00:06:41.052 }, 00:06:41.052 { 00:06:41.052 "name": "BaseBdev2", 00:06:41.052 "uuid": "b3b6deba-2711-11ef-b084-113036b5c18d", 00:06:41.052 "is_configured": true, 00:06:41.052 "data_offset": 0, 00:06:41.052 "data_size": 65536 00:06:41.052 } 00:06:41.052 ] 00:06:41.052 } 00:06:41.052 } 00:06:41.052 }' 00:06:41.052 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 
-- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:41.052 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:06:41.052 BaseBdev2' 00:06:41.052 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:41.052 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:06:41.052 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:41.310 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:41.310 "name": "BaseBdev1", 00:06:41.310 "aliases": [ 00:06:41.310 "b1bf4f84-2711-11ef-b084-113036b5c18d" 00:06:41.310 ], 00:06:41.310 "product_name": "Malloc disk", 00:06:41.310 "block_size": 512, 00:06:41.310 "num_blocks": 65536, 00:06:41.310 "uuid": "b1bf4f84-2711-11ef-b084-113036b5c18d", 00:06:41.310 "assigned_rate_limits": { 00:06:41.310 "rw_ios_per_sec": 0, 00:06:41.310 "rw_mbytes_per_sec": 0, 00:06:41.310 "r_mbytes_per_sec": 0, 00:06:41.310 "w_mbytes_per_sec": 0 00:06:41.310 }, 00:06:41.310 "claimed": true, 00:06:41.310 "claim_type": "exclusive_write", 00:06:41.310 "zoned": false, 00:06:41.310 "supported_io_types": { 00:06:41.310 "read": true, 00:06:41.310 "write": true, 00:06:41.310 "unmap": true, 00:06:41.310 "write_zeroes": true, 00:06:41.310 "flush": true, 00:06:41.310 "reset": true, 00:06:41.311 "compare": false, 00:06:41.311 "compare_and_write": false, 00:06:41.311 "abort": true, 00:06:41.311 "nvme_admin": false, 00:06:41.311 "nvme_io": false 00:06:41.311 }, 00:06:41.311 "memory_domains": [ 00:06:41.311 { 00:06:41.311 "dma_device_id": "system", 00:06:41.311 "dma_device_type": 1 00:06:41.311 }, 00:06:41.311 { 00:06:41.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.311 "dma_device_type": 2 00:06:41.311 } 00:06:41.311 ], 00:06:41.311 "driver_specific": {} 00:06:41.311 }' 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:06:41.311 10:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:41.569 "name": "BaseBdev2", 00:06:41.569 "aliases": [ 00:06:41.569 "b3b6deba-2711-11ef-b084-113036b5c18d" 00:06:41.569 ], 00:06:41.569 "product_name": "Malloc disk", 00:06:41.569 "block_size": 512, 00:06:41.569 "num_blocks": 65536, 00:06:41.569 "uuid": "b3b6deba-2711-11ef-b084-113036b5c18d", 00:06:41.569 "assigned_rate_limits": { 00:06:41.569 "rw_ios_per_sec": 0, 00:06:41.569 "rw_mbytes_per_sec": 0, 00:06:41.569 "r_mbytes_per_sec": 0, 00:06:41.569 "w_mbytes_per_sec": 0 00:06:41.569 }, 00:06:41.569 "claimed": true, 00:06:41.569 "claim_type": "exclusive_write", 00:06:41.569 "zoned": false, 00:06:41.569 "supported_io_types": { 00:06:41.569 "read": true, 00:06:41.569 "write": true, 00:06:41.569 "unmap": true, 00:06:41.569 "write_zeroes": true, 00:06:41.569 "flush": true, 00:06:41.569 "reset": true, 00:06:41.569 "compare": false, 00:06:41.569 "compare_and_write": false, 00:06:41.569 "abort": true, 00:06:41.569 "nvme_admin": false, 00:06:41.569 "nvme_io": false 00:06:41.569 }, 00:06:41.569 "memory_domains": [ 00:06:41.569 { 00:06:41.569 "dma_device_id": "system", 00:06:41.569 "dma_device_type": 1 00:06:41.569 }, 00:06:41.569 { 00:06:41.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.569 "dma_device_type": 2 00:06:41.569 } 00:06:41.569 ], 00:06:41.569 "driver_specific": {} 00:06:41.569 }' 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:41.569 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:41.874 [2024-06-10 10:10:47.324841] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:41.874 [2024-06-10 10:10:47.324859] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:41.874 [2024-06-10 10:10:47.324869] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # 
has_redundancy raid0 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:41.874 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:42.133 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:42.133 "name": "Existed_Raid", 00:06:42.133 "uuid": "b3b6e43f-2711-11ef-b084-113036b5c18d", 00:06:42.133 "strip_size_kb": 64, 00:06:42.133 "state": "offline", 00:06:42.133 "raid_level": "raid0", 00:06:42.133 "superblock": false, 00:06:42.133 "num_base_bdevs": 2, 00:06:42.133 "num_base_bdevs_discovered": 1, 00:06:42.133 "num_base_bdevs_operational": 1, 00:06:42.133 "base_bdevs_list": [ 00:06:42.133 { 00:06:42.133 "name": null, 00:06:42.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:42.133 "is_configured": false, 00:06:42.133 "data_offset": 0, 00:06:42.133 "data_size": 65536 00:06:42.133 }, 00:06:42.133 { 00:06:42.133 "name": "BaseBdev2", 00:06:42.133 "uuid": "b3b6deba-2711-11ef-b084-113036b5c18d", 00:06:42.133 "is_configured": true, 00:06:42.133 "data_offset": 0, 00:06:42.133 "data_size": 65536 00:06:42.133 } 00:06:42.133 ] 00:06:42.133 }' 00:06:42.133 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:42.133 10:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.391 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:06:42.391 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:42.391 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:42.391 10:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:06:42.650 10:10:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:06:42.650 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:42.650 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:42.650 [2024-06-10 10:10:48.233504] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:42.650 [2024-06-10 10:10:48.233525] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b6aca00 name Existed_Raid, state offline 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 49398 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 49398 ']' 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 49398 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps -c -o command 49398 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # tail -1 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:06:42.910 killing process with pid 49398 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 49398' 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 49398 00:06:42.910 [2024-06-10 10:10:48.459243] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:42.910 [2024-06-10 10:10:48.459277] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:42.910 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 49398 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:06:43.169 00:06:43.169 real 0m8.820s 00:06:43.169 user 0m15.536s 00:06:43.169 sys 0m1.376s 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.169 ************************************ 00:06:43.169 END TEST raid_state_function_test 
00:06:43.169 ************************************ 00:06:43.169 10:10:48 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:43.169 10:10:48 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:06:43.169 10:10:48 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:43.169 10:10:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.169 ************************************ 00:06:43.169 START TEST raid_state_function_test_sb 00:06:43.169 ************************************ 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 2 true 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=49673 00:06:43.169 Process raid pid: 49673 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 
49673' 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 49673 /var/tmp/spdk-raid.sock 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 49673 ']' 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:43.169 10:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:43.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:43.170 10:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:43.170 10:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:43.170 10:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:43.170 10:10:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:43.170 [2024-06-10 10:10:48.686845] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:43.170 [2024-06-10 10:10:48.687098] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:43.738 EAL: TSC is not safe to use in SMP mode 00:06:43.738 EAL: TSC is not invariant 00:06:43.738 [2024-06-10 10:10:49.142481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.738 [2024-06-10 10:10:49.217987] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:43.738 [2024-06-10 10:10:49.219989] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.738 [2024-06-10 10:10:49.220618] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.738 [2024-06-10 10:10:49.220629] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.307 10:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:44.307 10:10:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:06:44.307 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:44.566 [2024-06-10 10:10:49.950681] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:44.566 [2024-06-10 10:10:49.950719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:44.566 [2024-06-10 10:10:49.950723] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:44.566 [2024-06-10 10:10:49.950729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:44.566 10:10:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.826 10:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:44.826 "name": "Existed_Raid", 00:06:44.826 "uuid": "b660ea4c-2711-11ef-b084-113036b5c18d", 00:06:44.826 "strip_size_kb": 64, 00:06:44.826 "state": "configuring", 00:06:44.826 "raid_level": "raid0", 00:06:44.826 "superblock": true, 00:06:44.826 "num_base_bdevs": 2, 00:06:44.826 "num_base_bdevs_discovered": 0, 00:06:44.826 "num_base_bdevs_operational": 2, 00:06:44.826 "base_bdevs_list": [ 00:06:44.826 { 00:06:44.826 "name": "BaseBdev1", 00:06:44.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.826 "is_configured": false, 00:06:44.826 "data_offset": 0, 00:06:44.826 "data_size": 0 
00:06:44.826 }, 00:06:44.826 { 00:06:44.826 "name": "BaseBdev2", 00:06:44.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.826 "is_configured": false, 00:06:44.826 "data_offset": 0, 00:06:44.826 "data_size": 0 00:06:44.826 } 00:06:44.826 ] 00:06:44.826 }' 00:06:44.826 10:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:44.826 10:10:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:45.085 10:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:45.343 [2024-06-10 10:10:50.698683] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:45.343 [2024-06-10 10:10:50.698701] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cb8e500 name Existed_Raid, state configuring 00:06:45.343 10:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:45.343 [2024-06-10 10:10:50.890700] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:45.343 [2024-06-10 10:10:50.890734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:45.343 [2024-06-10 10:10:50.890738] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:45.343 [2024-06-10 10:10:50.890745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:45.343 10:10:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:45.601 [2024-06-10 10:10:51.087547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:45.601 BaseBdev1 00:06:45.601 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:06:45.601 10:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:06:45.601 10:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:06:45.601 10:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:06:45.601 10:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:06:45.601 10:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:06:45.601 10:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:45.859 10:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:46.137 [ 00:06:46.137 { 00:06:46.137 "name": "BaseBdev1", 00:06:46.137 "aliases": [ 00:06:46.137 "b70e4257-2711-11ef-b084-113036b5c18d" 00:06:46.137 ], 00:06:46.137 "product_name": "Malloc disk", 00:06:46.137 "block_size": 512, 00:06:46.137 "num_blocks": 65536, 00:06:46.137 "uuid": "b70e4257-2711-11ef-b084-113036b5c18d", 00:06:46.137 "assigned_rate_limits": { 00:06:46.137 "rw_ios_per_sec": 0, 
00:06:46.137 "rw_mbytes_per_sec": 0, 00:06:46.137 "r_mbytes_per_sec": 0, 00:06:46.137 "w_mbytes_per_sec": 0 00:06:46.137 }, 00:06:46.137 "claimed": true, 00:06:46.137 "claim_type": "exclusive_write", 00:06:46.137 "zoned": false, 00:06:46.137 "supported_io_types": { 00:06:46.137 "read": true, 00:06:46.137 "write": true, 00:06:46.137 "unmap": true, 00:06:46.137 "write_zeroes": true, 00:06:46.137 "flush": true, 00:06:46.137 "reset": true, 00:06:46.137 "compare": false, 00:06:46.137 "compare_and_write": false, 00:06:46.137 "abort": true, 00:06:46.137 "nvme_admin": false, 00:06:46.137 "nvme_io": false 00:06:46.137 }, 00:06:46.137 "memory_domains": [ 00:06:46.137 { 00:06:46.137 "dma_device_id": "system", 00:06:46.137 "dma_device_type": 1 00:06:46.137 }, 00:06:46.137 { 00:06:46.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.137 "dma_device_type": 2 00:06:46.137 } 00:06:46.137 ], 00:06:46.137 "driver_specific": {} 00:06:46.137 } 00:06:46.137 ] 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:46.137 "name": "Existed_Raid", 00:06:46.137 "uuid": "b6f059e0-2711-11ef-b084-113036b5c18d", 00:06:46.137 "strip_size_kb": 64, 00:06:46.137 "state": "configuring", 00:06:46.137 "raid_level": "raid0", 00:06:46.137 "superblock": true, 00:06:46.137 "num_base_bdevs": 2, 00:06:46.137 "num_base_bdevs_discovered": 1, 00:06:46.137 "num_base_bdevs_operational": 2, 00:06:46.137 "base_bdevs_list": [ 00:06:46.137 { 00:06:46.137 "name": "BaseBdev1", 00:06:46.137 "uuid": "b70e4257-2711-11ef-b084-113036b5c18d", 00:06:46.137 "is_configured": true, 00:06:46.137 "data_offset": 2048, 00:06:46.137 "data_size": 63488 00:06:46.137 }, 00:06:46.137 { 00:06:46.137 "name": "BaseBdev2", 00:06:46.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:46.137 "is_configured": false, 00:06:46.137 "data_offset": 0, 00:06:46.137 "data_size": 0 00:06:46.137 } 00:06:46.137 ] 
00:06:46.137 }' 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:46.137 10:10:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.702 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:46.702 [2024-06-10 10:10:52.182736] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:46.702 [2024-06-10 10:10:52.182759] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cb8e500 name Existed_Raid, state configuring 00:06:46.702 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:46.959 [2024-06-10 10:10:52.366749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:46.959 [2024-06-10 10:10:52.367405] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:46.959 [2024-06-10 10:10:52.367447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:46.959 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:06:46.959 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:46.959 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:46.959 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:46.960 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:46.960 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:46.960 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:46.960 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:46.960 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:46.960 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:46.960 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:46.960 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:46.960 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:46.960 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:47.216 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:47.216 "name": "Existed_Raid", 00:06:47.216 "uuid": "b7d193f6-2711-11ef-b084-113036b5c18d", 00:06:47.216 "strip_size_kb": 64, 00:06:47.216 "state": "configuring", 00:06:47.216 "raid_level": "raid0", 00:06:47.216 "superblock": true, 00:06:47.216 "num_base_bdevs": 2, 00:06:47.216 "num_base_bdevs_discovered": 1, 00:06:47.216 "num_base_bdevs_operational": 2, 00:06:47.216 "base_bdevs_list": [ 
00:06:47.216 { 00:06:47.216 "name": "BaseBdev1", 00:06:47.216 "uuid": "b70e4257-2711-11ef-b084-113036b5c18d", 00:06:47.216 "is_configured": true, 00:06:47.216 "data_offset": 2048, 00:06:47.216 "data_size": 63488 00:06:47.216 }, 00:06:47.216 { 00:06:47.216 "name": "BaseBdev2", 00:06:47.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.216 "is_configured": false, 00:06:47.216 "data_offset": 0, 00:06:47.216 "data_size": 0 00:06:47.216 } 00:06:47.217 ] 00:06:47.217 }' 00:06:47.217 10:10:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:47.217 10:10:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.782 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:47.782 [2024-06-10 10:10:53.314884] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:47.782 [2024-06-10 10:10:53.314949] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cb8ea00 00:06:47.782 [2024-06-10 10:10:53.314954] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:47.782 [2024-06-10 10:10:53.314972] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cbf1ec0 00:06:47.782 [2024-06-10 10:10:53.315004] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cb8ea00 00:06:47.782 [2024-06-10 10:10:53.315008] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82cb8ea00 00:06:47.782 [2024-06-10 10:10:53.315023] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.782 BaseBdev2 00:06:47.782 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:06:47.782 10:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:06:47.782 10:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:06:47.782 10:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:06:47.782 10:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:06:47.782 10:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:06:47.782 10:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:48.040 10:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:48.297 [ 00:06:48.297 { 00:06:48.297 "name": "BaseBdev2", 00:06:48.297 "aliases": [ 00:06:48.297 "b8623cab-2711-11ef-b084-113036b5c18d" 00:06:48.297 ], 00:06:48.297 "product_name": "Malloc disk", 00:06:48.297 "block_size": 512, 00:06:48.297 "num_blocks": 65536, 00:06:48.297 "uuid": "b8623cab-2711-11ef-b084-113036b5c18d", 00:06:48.297 "assigned_rate_limits": { 00:06:48.297 "rw_ios_per_sec": 0, 00:06:48.297 "rw_mbytes_per_sec": 0, 00:06:48.297 "r_mbytes_per_sec": 0, 00:06:48.297 "w_mbytes_per_sec": 0 00:06:48.297 }, 00:06:48.297 "claimed": true, 00:06:48.297 "claim_type": "exclusive_write", 00:06:48.297 "zoned": false, 00:06:48.297 
"supported_io_types": { 00:06:48.297 "read": true, 00:06:48.297 "write": true, 00:06:48.297 "unmap": true, 00:06:48.297 "write_zeroes": true, 00:06:48.297 "flush": true, 00:06:48.297 "reset": true, 00:06:48.297 "compare": false, 00:06:48.297 "compare_and_write": false, 00:06:48.297 "abort": true, 00:06:48.297 "nvme_admin": false, 00:06:48.297 "nvme_io": false 00:06:48.297 }, 00:06:48.297 "memory_domains": [ 00:06:48.297 { 00:06:48.297 "dma_device_id": "system", 00:06:48.297 "dma_device_type": 1 00:06:48.297 }, 00:06:48.297 { 00:06:48.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:48.297 "dma_device_type": 2 00:06:48.297 } 00:06:48.297 ], 00:06:48.297 "driver_specific": {} 00:06:48.297 } 00:06:48.297 ] 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:48.297 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:48.298 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:48.298 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:48.298 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.555 10:10:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:48.555 "name": "Existed_Raid", 00:06:48.555 "uuid": "b7d193f6-2711-11ef-b084-113036b5c18d", 00:06:48.555 "strip_size_kb": 64, 00:06:48.555 "state": "online", 00:06:48.555 "raid_level": "raid0", 00:06:48.555 "superblock": true, 00:06:48.555 "num_base_bdevs": 2, 00:06:48.555 "num_base_bdevs_discovered": 2, 00:06:48.555 "num_base_bdevs_operational": 2, 00:06:48.555 "base_bdevs_list": [ 00:06:48.555 { 00:06:48.555 "name": "BaseBdev1", 00:06:48.555 "uuid": "b70e4257-2711-11ef-b084-113036b5c18d", 00:06:48.555 "is_configured": true, 00:06:48.555 "data_offset": 2048, 00:06:48.555 "data_size": 63488 00:06:48.555 }, 00:06:48.555 { 00:06:48.555 "name": "BaseBdev2", 00:06:48.555 "uuid": "b8623cab-2711-11ef-b084-113036b5c18d", 00:06:48.555 "is_configured": true, 00:06:48.555 "data_offset": 2048, 00:06:48.555 "data_size": 63488 00:06:48.555 } 00:06:48.555 ] 00:06:48.555 }' 00:06:48.555 10:10:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:48.555 10:10:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.813 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:06:48.813 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:06:48.813 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:06:48.813 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:06:48.813 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:06:48.813 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:06:48.813 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:06:48.813 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:06:49.070 [2024-06-10 10:10:54.522821] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.070 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:06:49.070 "name": "Existed_Raid", 00:06:49.070 "aliases": [ 00:06:49.070 "b7d193f6-2711-11ef-b084-113036b5c18d" 00:06:49.070 ], 00:06:49.070 "product_name": "Raid Volume", 00:06:49.070 "block_size": 512, 00:06:49.070 "num_blocks": 126976, 00:06:49.070 "uuid": "b7d193f6-2711-11ef-b084-113036b5c18d", 00:06:49.070 "assigned_rate_limits": { 00:06:49.070 "rw_ios_per_sec": 0, 00:06:49.070 "rw_mbytes_per_sec": 0, 00:06:49.070 "r_mbytes_per_sec": 0, 00:06:49.070 "w_mbytes_per_sec": 0 00:06:49.070 }, 00:06:49.070 "claimed": false, 00:06:49.070 "zoned": false, 00:06:49.070 "supported_io_types": { 00:06:49.070 "read": true, 00:06:49.070 "write": true, 00:06:49.070 "unmap": true, 00:06:49.070 "write_zeroes": true, 00:06:49.070 "flush": true, 00:06:49.070 "reset": true, 00:06:49.070 "compare": false, 00:06:49.070 "compare_and_write": false, 00:06:49.070 "abort": false, 00:06:49.070 "nvme_admin": false, 00:06:49.070 "nvme_io": false 00:06:49.070 }, 00:06:49.070 "memory_domains": [ 00:06:49.070 { 00:06:49.070 "dma_device_id": "system", 00:06:49.070 "dma_device_type": 1 00:06:49.070 }, 00:06:49.070 { 00:06:49.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.070 "dma_device_type": 2 00:06:49.070 }, 00:06:49.070 { 00:06:49.070 "dma_device_id": "system", 00:06:49.070 "dma_device_type": 1 00:06:49.070 }, 00:06:49.070 { 00:06:49.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.070 "dma_device_type": 2 00:06:49.070 } 00:06:49.070 ], 00:06:49.070 "driver_specific": { 00:06:49.070 "raid": { 00:06:49.070 "uuid": "b7d193f6-2711-11ef-b084-113036b5c18d", 00:06:49.070 "strip_size_kb": 64, 00:06:49.070 "state": "online", 00:06:49.070 "raid_level": "raid0", 00:06:49.070 "superblock": true, 00:06:49.070 "num_base_bdevs": 2, 00:06:49.070 "num_base_bdevs_discovered": 2, 00:06:49.070 "num_base_bdevs_operational": 2, 00:06:49.070 "base_bdevs_list": [ 00:06:49.070 { 00:06:49.070 "name": "BaseBdev1", 00:06:49.070 "uuid": "b70e4257-2711-11ef-b084-113036b5c18d", 00:06:49.070 "is_configured": true, 00:06:49.070 "data_offset": 2048, 00:06:49.070 "data_size": 63488 00:06:49.070 }, 00:06:49.070 { 00:06:49.070 "name": "BaseBdev2", 00:06:49.070 
"uuid": "b8623cab-2711-11ef-b084-113036b5c18d", 00:06:49.070 "is_configured": true, 00:06:49.071 "data_offset": 2048, 00:06:49.071 "data_size": 63488 00:06:49.071 } 00:06:49.071 ] 00:06:49.071 } 00:06:49.071 } 00:06:49.071 }' 00:06:49.071 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:49.071 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:06:49.071 BaseBdev2' 00:06:49.071 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:49.071 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:06:49.071 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:49.329 "name": "BaseBdev1", 00:06:49.329 "aliases": [ 00:06:49.329 "b70e4257-2711-11ef-b084-113036b5c18d" 00:06:49.329 ], 00:06:49.329 "product_name": "Malloc disk", 00:06:49.329 "block_size": 512, 00:06:49.329 "num_blocks": 65536, 00:06:49.329 "uuid": "b70e4257-2711-11ef-b084-113036b5c18d", 00:06:49.329 "assigned_rate_limits": { 00:06:49.329 "rw_ios_per_sec": 0, 00:06:49.329 "rw_mbytes_per_sec": 0, 00:06:49.329 "r_mbytes_per_sec": 0, 00:06:49.329 "w_mbytes_per_sec": 0 00:06:49.329 }, 00:06:49.329 "claimed": true, 00:06:49.329 "claim_type": "exclusive_write", 00:06:49.329 "zoned": false, 00:06:49.329 "supported_io_types": { 00:06:49.329 "read": true, 00:06:49.329 "write": true, 00:06:49.329 "unmap": true, 00:06:49.329 "write_zeroes": true, 00:06:49.329 "flush": true, 00:06:49.329 "reset": true, 00:06:49.329 "compare": false, 00:06:49.329 "compare_and_write": false, 00:06:49.329 "abort": true, 00:06:49.329 "nvme_admin": false, 00:06:49.329 "nvme_io": false 00:06:49.329 }, 00:06:49.329 "memory_domains": [ 00:06:49.329 { 00:06:49.329 "dma_device_id": "system", 00:06:49.329 "dma_device_type": 1 00:06:49.329 }, 00:06:49.329 { 00:06:49.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.329 "dma_device_type": 2 00:06:49.329 } 00:06:49.329 ], 00:06:49.329 "driver_specific": {} 00:06:49.329 }' 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:49.329 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:49.587 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:49.587 
10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:49.587 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:49.587 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:06:49.587 10:10:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:49.587 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:49.587 "name": "BaseBdev2", 00:06:49.587 "aliases": [ 00:06:49.587 "b8623cab-2711-11ef-b084-113036b5c18d" 00:06:49.587 ], 00:06:49.587 "product_name": "Malloc disk", 00:06:49.587 "block_size": 512, 00:06:49.587 "num_blocks": 65536, 00:06:49.587 "uuid": "b8623cab-2711-11ef-b084-113036b5c18d", 00:06:49.587 "assigned_rate_limits": { 00:06:49.587 "rw_ios_per_sec": 0, 00:06:49.587 "rw_mbytes_per_sec": 0, 00:06:49.587 "r_mbytes_per_sec": 0, 00:06:49.587 "w_mbytes_per_sec": 0 00:06:49.587 }, 00:06:49.587 "claimed": true, 00:06:49.587 "claim_type": "exclusive_write", 00:06:49.587 "zoned": false, 00:06:49.587 "supported_io_types": { 00:06:49.587 "read": true, 00:06:49.587 "write": true, 00:06:49.587 "unmap": true, 00:06:49.587 "write_zeroes": true, 00:06:49.587 "flush": true, 00:06:49.587 "reset": true, 00:06:49.587 "compare": false, 00:06:49.587 "compare_and_write": false, 00:06:49.587 "abort": true, 00:06:49.587 "nvme_admin": false, 00:06:49.587 "nvme_io": false 00:06:49.587 }, 00:06:49.587 "memory_domains": [ 00:06:49.587 { 00:06:49.587 "dma_device_id": "system", 00:06:49.587 "dma_device_type": 1 00:06:49.587 }, 00:06:49.587 { 00:06:49.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.587 "dma_device_type": 2 00:06:49.587 } 00:06:49.587 ], 00:06:49.587 "driver_specific": {} 00:06:49.587 }' 00:06:49.587 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:49.587 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:49.587 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:49.587 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:49.846 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:49.846 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:49.846 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:49.846 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:49.846 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:49.846 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:49.846 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:49.846 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:49.846 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:06:49.846 [2024-06-10 10:10:55.426819] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:49.846 [2024-06-10 10:10:55.426838] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:49.846 [2024-06-10 10:10:55.426849] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.104 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:06:50.104 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:06:50.104 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:06:50.104 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:06:50.104 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:06:50.104 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:50.104 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:50.104 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:06:50.104 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:50.105 "name": "Existed_Raid", 00:06:50.105 "uuid": "b7d193f6-2711-11ef-b084-113036b5c18d", 00:06:50.105 "strip_size_kb": 64, 00:06:50.105 "state": "offline", 00:06:50.105 "raid_level": "raid0", 00:06:50.105 "superblock": true, 00:06:50.105 "num_base_bdevs": 2, 00:06:50.105 "num_base_bdevs_discovered": 1, 00:06:50.105 "num_base_bdevs_operational": 1, 00:06:50.105 "base_bdevs_list": [ 00:06:50.105 { 00:06:50.105 "name": null, 00:06:50.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.105 "is_configured": false, 00:06:50.105 "data_offset": 2048, 00:06:50.105 "data_size": 63488 00:06:50.105 }, 00:06:50.105 { 00:06:50.105 "name": "BaseBdev2", 00:06:50.105 "uuid": "b8623cab-2711-11ef-b084-113036b5c18d", 00:06:50.105 "is_configured": true, 00:06:50.105 "data_offset": 2048, 00:06:50.105 "data_size": 63488 00:06:50.105 } 00:06:50.105 ] 00:06:50.105 }' 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:50.105 10:10:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:50.696 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:06:50.696 
10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:50.696 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:50.696 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:06:50.696 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:06:50.696 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:50.696 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:06:50.989 [2024-06-10 10:10:56.455631] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:50.989 [2024-06-10 10:10:56.455667] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cb8ea00 name Existed_Raid, state offline 00:06:50.989 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:06:50.989 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:06:50.989 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:50.989 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 49673 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 49673 ']' 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 49673 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps -c -o command 49673 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # tail -1 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:06:51.248 killing process with pid 49673 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 49673' 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 49673 00:06:51.248 [2024-06-10 10:10:56.690057] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:51.248 [2024-06-10 10:10:56.690095] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.248 10:10:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@973 -- # wait 49673 00:06:51.507 10:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:06:51.507 00:06:51.507 real 0m8.190s 00:06:51.507 user 0m14.263s 00:06:51.507 sys 0m1.383s 00:06:51.507 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.507 10:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:51.507 ************************************ 00:06:51.507 END TEST raid_state_function_test_sb 00:06:51.507 ************************************ 00:06:51.507 10:10:56 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:51.507 10:10:56 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:51.507 10:10:56 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.507 10:10:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.507 ************************************ 00:06:51.507 START TEST raid_superblock_test 00:06:51.507 ************************************ 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid0 2 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=49943 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 49943 /var/tmp/spdk-raid.sock 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 49943 ']' 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:51.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:51.507 10:10:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.507 [2024-06-10 10:10:56.925193] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:06:51.507 [2024-06-10 10:10:56.925477] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:52.074 EAL: TSC is not safe to use in SMP mode 00:06:52.074 EAL: TSC is not invariant 00:06:52.074 [2024-06-10 10:10:57.424874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.074 [2024-06-10 10:10:57.507483] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:52.074 [2024-06-10 10:10:57.509877] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.074 [2024-06-10 10:10:57.510710] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.074 [2024-06-10 10:10:57.510725] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.640 10:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:52.640 10:10:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:06:52.640 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:06:52.640 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:06:52.640 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:06:52.640 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:06:52.640 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:52.640 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:52.640 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:06:52.641 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:52.641 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:06:52.899 malloc1 00:06:52.899 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:53.157 [2024-06-10 10:10:58.733812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:53.157 [2024-06-10 10:10:58.733868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:53.157 [2024-06-10 10:10:58.733880] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa6a780 00:06:53.157 [2024-06-10 
10:10:58.733888] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:53.157 [2024-06-10 10:10:58.734699] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:53.157 [2024-06-10 10:10:58.734741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:53.157 pt1 00:06:53.416 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:06:53.416 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:06:53.416 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:06:53.416 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:06:53.416 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:53.416 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:53.416 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:06:53.416 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:53.416 10:10:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:06:53.674 malloc2 00:06:53.674 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:53.933 [2024-06-10 10:10:59.333821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:53.933 [2024-06-10 10:10:59.333874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:53.933 [2024-06-10 10:10:59.333886] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa6ac80 00:06:53.933 [2024-06-10 10:10:59.333893] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:53.933 [2024-06-10 10:10:59.334426] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:53.933 [2024-06-10 10:10:59.334459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:53.933 pt2 00:06:53.933 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:06:53.933 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:06:53.933 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:06:54.192 [2024-06-10 10:10:59.637832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:54.192 [2024-06-10 10:10:59.638297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:54.192 [2024-06-10 10:10:59.638349] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa6af00 00:06:54.192 [2024-06-10 10:10:59.638354] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:54.192 [2024-06-10 10:10:59.638386] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aacde20 00:06:54.192 [2024-06-10 10:10:59.638457] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x82aa6af00 00:06:54.192 [2024-06-10 10:10:59.638466] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82aa6af00 00:06:54.192 [2024-06-10 10:10:59.638494] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:54.192 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:54.451 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:54.452 "name": "raid_bdev1", 00:06:54.452 "uuid": "bc270e70-2711-11ef-b084-113036b5c18d", 00:06:54.452 "strip_size_kb": 64, 00:06:54.452 "state": "online", 00:06:54.452 "raid_level": "raid0", 00:06:54.452 "superblock": true, 00:06:54.452 "num_base_bdevs": 2, 00:06:54.452 "num_base_bdevs_discovered": 2, 00:06:54.452 "num_base_bdevs_operational": 2, 00:06:54.452 "base_bdevs_list": [ 00:06:54.452 { 00:06:54.452 "name": "pt1", 00:06:54.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:54.452 "is_configured": true, 00:06:54.452 "data_offset": 2048, 00:06:54.452 "data_size": 63488 00:06:54.452 }, 00:06:54.452 { 00:06:54.452 "name": "pt2", 00:06:54.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:54.452 "is_configured": true, 00:06:54.452 "data_offset": 2048, 00:06:54.452 "data_size": 63488 00:06:54.452 } 00:06:54.452 ] 00:06:54.452 }' 00:06:54.452 10:10:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:54.452 10:10:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.020 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:06:55.020 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:06:55.020 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:06:55.020 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:06:55.020 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:06:55.020 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:06:55.020 10:11:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:06:55.020 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:55.021 [2024-06-10 10:11:00.553874] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.021 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:06:55.021 "name": "raid_bdev1", 00:06:55.021 "aliases": [ 00:06:55.021 "bc270e70-2711-11ef-b084-113036b5c18d" 00:06:55.021 ], 00:06:55.021 "product_name": "Raid Volume", 00:06:55.021 "block_size": 512, 00:06:55.021 "num_blocks": 126976, 00:06:55.021 "uuid": "bc270e70-2711-11ef-b084-113036b5c18d", 00:06:55.021 "assigned_rate_limits": { 00:06:55.021 "rw_ios_per_sec": 0, 00:06:55.021 "rw_mbytes_per_sec": 0, 00:06:55.021 "r_mbytes_per_sec": 0, 00:06:55.021 "w_mbytes_per_sec": 0 00:06:55.021 }, 00:06:55.021 "claimed": false, 00:06:55.021 "zoned": false, 00:06:55.021 "supported_io_types": { 00:06:55.021 "read": true, 00:06:55.021 "write": true, 00:06:55.021 "unmap": true, 00:06:55.021 "write_zeroes": true, 00:06:55.021 "flush": true, 00:06:55.021 "reset": true, 00:06:55.021 "compare": false, 00:06:55.021 "compare_and_write": false, 00:06:55.021 "abort": false, 00:06:55.021 "nvme_admin": false, 00:06:55.021 "nvme_io": false 00:06:55.021 }, 00:06:55.021 "memory_domains": [ 00:06:55.021 { 00:06:55.021 "dma_device_id": "system", 00:06:55.021 "dma_device_type": 1 00:06:55.021 }, 00:06:55.021 { 00:06:55.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.021 "dma_device_type": 2 00:06:55.021 }, 00:06:55.021 { 00:06:55.021 "dma_device_id": "system", 00:06:55.021 "dma_device_type": 1 00:06:55.021 }, 00:06:55.021 { 00:06:55.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.021 "dma_device_type": 2 00:06:55.021 } 00:06:55.021 ], 00:06:55.021 "driver_specific": { 00:06:55.021 "raid": { 00:06:55.021 "uuid": "bc270e70-2711-11ef-b084-113036b5c18d", 00:06:55.021 "strip_size_kb": 64, 00:06:55.021 "state": "online", 00:06:55.021 "raid_level": "raid0", 00:06:55.021 "superblock": true, 00:06:55.021 "num_base_bdevs": 2, 00:06:55.021 "num_base_bdevs_discovered": 2, 00:06:55.021 "num_base_bdevs_operational": 2, 00:06:55.021 "base_bdevs_list": [ 00:06:55.021 { 00:06:55.021 "name": "pt1", 00:06:55.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:55.021 "is_configured": true, 00:06:55.021 "data_offset": 2048, 00:06:55.021 "data_size": 63488 00:06:55.021 }, 00:06:55.021 { 00:06:55.021 "name": "pt2", 00:06:55.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:55.021 "is_configured": true, 00:06:55.021 "data_offset": 2048, 00:06:55.021 "data_size": 63488 00:06:55.021 } 00:06:55.021 ] 00:06:55.021 } 00:06:55.021 } 00:06:55.021 }' 00:06:55.021 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:55.021 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:06:55.021 pt2' 00:06:55.021 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:55.021 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:06:55.021 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:55.280 10:11:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:55.280 "name": "pt1", 00:06:55.280 "aliases": [ 00:06:55.280 "00000000-0000-0000-0000-000000000001" 00:06:55.280 ], 00:06:55.280 "product_name": "passthru", 00:06:55.280 "block_size": 512, 00:06:55.280 "num_blocks": 65536, 00:06:55.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:55.280 "assigned_rate_limits": { 00:06:55.280 "rw_ios_per_sec": 0, 00:06:55.280 "rw_mbytes_per_sec": 0, 00:06:55.280 "r_mbytes_per_sec": 0, 00:06:55.280 "w_mbytes_per_sec": 0 00:06:55.280 }, 00:06:55.280 "claimed": true, 00:06:55.280 "claim_type": "exclusive_write", 00:06:55.280 "zoned": false, 00:06:55.280 "supported_io_types": { 00:06:55.280 "read": true, 00:06:55.280 "write": true, 00:06:55.280 "unmap": true, 00:06:55.280 "write_zeroes": true, 00:06:55.280 "flush": true, 00:06:55.280 "reset": true, 00:06:55.280 "compare": false, 00:06:55.280 "compare_and_write": false, 00:06:55.280 "abort": true, 00:06:55.280 "nvme_admin": false, 00:06:55.280 "nvme_io": false 00:06:55.280 }, 00:06:55.280 "memory_domains": [ 00:06:55.280 { 00:06:55.280 "dma_device_id": "system", 00:06:55.280 "dma_device_type": 1 00:06:55.280 }, 00:06:55.280 { 00:06:55.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.280 "dma_device_type": 2 00:06:55.280 } 00:06:55.280 ], 00:06:55.280 "driver_specific": { 00:06:55.280 "passthru": { 00:06:55.280 "name": "pt1", 00:06:55.280 "base_bdev_name": "malloc1" 00:06:55.280 } 00:06:55.280 } 00:06:55.280 }' 00:06:55.280 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:55.280 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:55.280 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:55.280 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:06:55.539 10:11:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:06:55.799 "name": "pt2", 00:06:55.799 "aliases": [ 00:06:55.799 "00000000-0000-0000-0000-000000000002" 00:06:55.799 ], 00:06:55.799 "product_name": "passthru", 00:06:55.799 "block_size": 512, 00:06:55.799 "num_blocks": 65536, 00:06:55.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:55.799 "assigned_rate_limits": { 00:06:55.799 "rw_ios_per_sec": 0, 
00:06:55.799 "rw_mbytes_per_sec": 0, 00:06:55.799 "r_mbytes_per_sec": 0, 00:06:55.799 "w_mbytes_per_sec": 0 00:06:55.799 }, 00:06:55.799 "claimed": true, 00:06:55.799 "claim_type": "exclusive_write", 00:06:55.799 "zoned": false, 00:06:55.799 "supported_io_types": { 00:06:55.799 "read": true, 00:06:55.799 "write": true, 00:06:55.799 "unmap": true, 00:06:55.799 "write_zeroes": true, 00:06:55.799 "flush": true, 00:06:55.799 "reset": true, 00:06:55.799 "compare": false, 00:06:55.799 "compare_and_write": false, 00:06:55.799 "abort": true, 00:06:55.799 "nvme_admin": false, 00:06:55.799 "nvme_io": false 00:06:55.799 }, 00:06:55.799 "memory_domains": [ 00:06:55.799 { 00:06:55.799 "dma_device_id": "system", 00:06:55.799 "dma_device_type": 1 00:06:55.799 }, 00:06:55.799 { 00:06:55.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.799 "dma_device_type": 2 00:06:55.799 } 00:06:55.799 ], 00:06:55.799 "driver_specific": { 00:06:55.799 "passthru": { 00:06:55.799 "name": "pt2", 00:06:55.799 "base_bdev_name": "malloc2" 00:06:55.799 } 00:06:55.799 } 00:06:55.799 }' 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:06:55.799 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:06:56.368 [2024-06-10 10:11:01.721893] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.368 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=bc270e70-2711-11ef-b084-113036b5c18d 00:06:56.368 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z bc270e70-2711-11ef-b084-113036b5c18d ']' 00:06:56.368 10:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:06:56.627 [2024-06-10 10:11:02.025859] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:56.627 [2024-06-10 10:11:02.025883] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:56.627 [2024-06-10 10:11:02.025902] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.627 [2024-06-10 10:11:02.025929] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.627 [2024-06-10 10:11:02.025933] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa6af00 name raid_bdev1, state offline 00:06:56.627 10:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:56.627 10:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:06:56.886 10:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:06:56.886 10:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:06:56.886 10:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:06:56.886 10:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:06:57.144 10:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:06:57.144 10:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:06:57.403 10:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:57.403 10:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:57.661 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:06:57.920 [2024-06-10 10:11:03.373881] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:57.920 [2024-06-10 10:11:03.374344] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:57.920 [2024-06-10 10:11:03.374362] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:57.920 [2024-06-10 10:11:03.374414] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:57.920 [2024-06-10 10:11:03.374424] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:57.920 [2024-06-10 10:11:03.374428] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa6ac80 name raid_bdev1, state configuring 00:06:57.920 request: 00:06:57.920 { 00:06:57.920 "name": "raid_bdev1", 00:06:57.920 "raid_level": "raid0", 00:06:57.920 "base_bdevs": [ 00:06:57.920 "malloc1", 00:06:57.920 "malloc2" 00:06:57.920 ], 00:06:57.920 "superblock": false, 00:06:57.920 "strip_size_kb": 64, 00:06:57.920 "method": "bdev_raid_create", 00:06:57.920 "req_id": 1 00:06:57.920 } 00:06:57.920 Got JSON-RPC error response 00:06:57.920 response: 00:06:57.920 { 00:06:57.920 "code": -17, 00:06:57.920 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:57.920 } 00:06:57.920 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:06:57.920 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:57.920 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:57.920 10:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:57.920 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:57.920 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:06:58.178 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:06:58.178 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:06:58.178 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:58.436 [2024-06-10 10:11:03.869878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:58.436 [2024-06-10 10:11:03.869929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.436 [2024-06-10 10:11:03.869939] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa6a780 00:06:58.437 [2024-06-10 10:11:03.869947] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.437 [2024-06-10 10:11:03.870447] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.437 [2024-06-10 10:11:03.870476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:58.437 [2024-06-10 10:11:03.870651] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:58.437 [2024-06-10 10:11:03.870683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt1 is claimed 00:06:58.437 pt1 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:58.437 10:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:58.695 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:58.695 "name": "raid_bdev1", 00:06:58.695 "uuid": "bc270e70-2711-11ef-b084-113036b5c18d", 00:06:58.695 "strip_size_kb": 64, 00:06:58.695 "state": "configuring", 00:06:58.695 "raid_level": "raid0", 00:06:58.695 "superblock": true, 00:06:58.695 "num_base_bdevs": 2, 00:06:58.695 "num_base_bdevs_discovered": 1, 00:06:58.695 "num_base_bdevs_operational": 2, 00:06:58.695 "base_bdevs_list": [ 00:06:58.695 { 00:06:58.695 "name": "pt1", 00:06:58.695 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:58.695 "is_configured": true, 00:06:58.695 "data_offset": 2048, 00:06:58.695 "data_size": 63488 00:06:58.695 }, 00:06:58.695 { 00:06:58.695 "name": null, 00:06:58.695 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:58.695 "is_configured": false, 00:06:58.695 "data_offset": 2048, 00:06:58.695 "data_size": 63488 00:06:58.695 } 00:06:58.695 ] 00:06:58.695 }' 00:06:58.695 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:58.695 10:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.262 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:06:59.262 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:06:59.262 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:06:59.262 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:59.521 [2024-06-10 10:11:04.877909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:59.521 [2024-06-10 10:11:04.877966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.521 [2024-06-10 10:11:04.877978] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82aa6af00 00:06:59.521 
[2024-06-10 10:11:04.877986] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.521 [2024-06-10 10:11:04.878085] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.521 [2024-06-10 10:11:04.878094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:59.521 [2024-06-10 10:11:04.878114] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:59.521 [2024-06-10 10:11:04.878121] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:59.521 [2024-06-10 10:11:04.878149] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aa6b180 00:06:59.521 [2024-06-10 10:11:04.878153] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:59.521 [2024-06-10 10:11:04.878172] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aacde20 00:06:59.521 [2024-06-10 10:11:04.878209] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aa6b180 00:06:59.521 [2024-06-10 10:11:04.878213] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82aa6b180 00:06:59.521 [2024-06-10 10:11:04.878231] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.521 pt2 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:59.521 10:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:59.780 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:59.780 "name": "raid_bdev1", 00:06:59.780 "uuid": "bc270e70-2711-11ef-b084-113036b5c18d", 00:06:59.780 "strip_size_kb": 64, 00:06:59.780 "state": "online", 00:06:59.780 "raid_level": "raid0", 00:06:59.780 "superblock": true, 00:06:59.780 "num_base_bdevs": 2, 00:06:59.780 "num_base_bdevs_discovered": 2, 00:06:59.780 "num_base_bdevs_operational": 2, 00:06:59.780 "base_bdevs_list": [ 00:06:59.780 { 00:06:59.780 "name": "pt1", 00:06:59.780 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:06:59.780 "is_configured": true, 00:06:59.780 "data_offset": 2048, 00:06:59.780 "data_size": 63488 00:06:59.780 }, 00:06:59.780 { 00:06:59.780 "name": "pt2", 00:06:59.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:59.780 "is_configured": true, 00:06:59.780 "data_offset": 2048, 00:06:59.780 "data_size": 63488 00:06:59.780 } 00:06:59.780 ] 00:06:59.780 }' 00:06:59.780 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:59.780 10:11:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.038 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:07:00.038 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:00.038 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:00.038 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:00.038 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:00.038 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:00.038 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:00.038 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:00.297 [2024-06-10 10:11:05.785946] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.297 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:00.297 "name": "raid_bdev1", 00:07:00.297 "aliases": [ 00:07:00.297 "bc270e70-2711-11ef-b084-113036b5c18d" 00:07:00.297 ], 00:07:00.297 "product_name": "Raid Volume", 00:07:00.297 "block_size": 512, 00:07:00.297 "num_blocks": 126976, 00:07:00.297 "uuid": "bc270e70-2711-11ef-b084-113036b5c18d", 00:07:00.297 "assigned_rate_limits": { 00:07:00.297 "rw_ios_per_sec": 0, 00:07:00.297 "rw_mbytes_per_sec": 0, 00:07:00.297 "r_mbytes_per_sec": 0, 00:07:00.297 "w_mbytes_per_sec": 0 00:07:00.297 }, 00:07:00.297 "claimed": false, 00:07:00.297 "zoned": false, 00:07:00.297 "supported_io_types": { 00:07:00.297 "read": true, 00:07:00.297 "write": true, 00:07:00.297 "unmap": true, 00:07:00.297 "write_zeroes": true, 00:07:00.297 "flush": true, 00:07:00.297 "reset": true, 00:07:00.297 "compare": false, 00:07:00.297 "compare_and_write": false, 00:07:00.297 "abort": false, 00:07:00.297 "nvme_admin": false, 00:07:00.297 "nvme_io": false 00:07:00.297 }, 00:07:00.297 "memory_domains": [ 00:07:00.297 { 00:07:00.297 "dma_device_id": "system", 00:07:00.297 "dma_device_type": 1 00:07:00.297 }, 00:07:00.297 { 00:07:00.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.297 "dma_device_type": 2 00:07:00.297 }, 00:07:00.297 { 00:07:00.297 "dma_device_id": "system", 00:07:00.297 "dma_device_type": 1 00:07:00.297 }, 00:07:00.297 { 00:07:00.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.297 "dma_device_type": 2 00:07:00.297 } 00:07:00.297 ], 00:07:00.297 "driver_specific": { 00:07:00.297 "raid": { 00:07:00.297 "uuid": "bc270e70-2711-11ef-b084-113036b5c18d", 00:07:00.297 "strip_size_kb": 64, 00:07:00.297 "state": "online", 00:07:00.297 "raid_level": "raid0", 00:07:00.297 "superblock": true, 00:07:00.297 "num_base_bdevs": 2, 00:07:00.297 "num_base_bdevs_discovered": 2, 00:07:00.297 
"num_base_bdevs_operational": 2, 00:07:00.297 "base_bdevs_list": [ 00:07:00.297 { 00:07:00.297 "name": "pt1", 00:07:00.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:00.297 "is_configured": true, 00:07:00.297 "data_offset": 2048, 00:07:00.297 "data_size": 63488 00:07:00.297 }, 00:07:00.297 { 00:07:00.297 "name": "pt2", 00:07:00.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.297 "is_configured": true, 00:07:00.297 "data_offset": 2048, 00:07:00.297 "data_size": 63488 00:07:00.297 } 00:07:00.297 ] 00:07:00.297 } 00:07:00.297 } 00:07:00.297 }' 00:07:00.297 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.297 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:00.297 pt2' 00:07:00.297 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:00.297 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:00.297 10:11:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:00.556 "name": "pt1", 00:07:00.556 "aliases": [ 00:07:00.556 "00000000-0000-0000-0000-000000000001" 00:07:00.556 ], 00:07:00.556 "product_name": "passthru", 00:07:00.556 "block_size": 512, 00:07:00.556 "num_blocks": 65536, 00:07:00.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:00.556 "assigned_rate_limits": { 00:07:00.556 "rw_ios_per_sec": 0, 00:07:00.556 "rw_mbytes_per_sec": 0, 00:07:00.556 "r_mbytes_per_sec": 0, 00:07:00.556 "w_mbytes_per_sec": 0 00:07:00.556 }, 00:07:00.556 "claimed": true, 00:07:00.556 "claim_type": "exclusive_write", 00:07:00.556 "zoned": false, 00:07:00.556 "supported_io_types": { 00:07:00.556 "read": true, 00:07:00.556 "write": true, 00:07:00.556 "unmap": true, 00:07:00.556 "write_zeroes": true, 00:07:00.556 "flush": true, 00:07:00.556 "reset": true, 00:07:00.556 "compare": false, 00:07:00.556 "compare_and_write": false, 00:07:00.556 "abort": true, 00:07:00.556 "nvme_admin": false, 00:07:00.556 "nvme_io": false 00:07:00.556 }, 00:07:00.556 "memory_domains": [ 00:07:00.556 { 00:07:00.556 "dma_device_id": "system", 00:07:00.556 "dma_device_type": 1 00:07:00.556 }, 00:07:00.556 { 00:07:00.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.556 "dma_device_type": 2 00:07:00.556 } 00:07:00.556 ], 00:07:00.556 "driver_specific": { 00:07:00.556 "passthru": { 00:07:00.556 "name": "pt1", 00:07:00.556 "base_bdev_name": "malloc1" 00:07:00.556 } 00:07:00.556 } 00:07:00.556 }' 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:00.556 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:00.815 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:00.815 "name": "pt2", 00:07:00.815 "aliases": [ 00:07:00.815 "00000000-0000-0000-0000-000000000002" 00:07:00.815 ], 00:07:00.815 "product_name": "passthru", 00:07:00.815 "block_size": 512, 00:07:00.815 "num_blocks": 65536, 00:07:00.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.815 "assigned_rate_limits": { 00:07:00.815 "rw_ios_per_sec": 0, 00:07:00.815 "rw_mbytes_per_sec": 0, 00:07:00.815 "r_mbytes_per_sec": 0, 00:07:00.815 "w_mbytes_per_sec": 0 00:07:00.815 }, 00:07:00.815 "claimed": true, 00:07:00.815 "claim_type": "exclusive_write", 00:07:00.815 "zoned": false, 00:07:00.815 "supported_io_types": { 00:07:00.815 "read": true, 00:07:00.815 "write": true, 00:07:00.815 "unmap": true, 00:07:00.815 "write_zeroes": true, 00:07:00.815 "flush": true, 00:07:00.815 "reset": true, 00:07:00.815 "compare": false, 00:07:00.815 "compare_and_write": false, 00:07:00.815 "abort": true, 00:07:00.815 "nvme_admin": false, 00:07:00.815 "nvme_io": false 00:07:00.815 }, 00:07:00.815 "memory_domains": [ 00:07:00.815 { 00:07:00.815 "dma_device_id": "system", 00:07:00.815 "dma_device_type": 1 00:07:00.815 }, 00:07:00.815 { 00:07:00.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.815 "dma_device_type": 2 00:07:00.815 } 00:07:00.815 ], 00:07:00.815 "driver_specific": { 00:07:00.815 "passthru": { 00:07:00.815 "name": "pt2", 00:07:00.815 "base_bdev_name": "malloc2" 00:07:00.815 } 00:07:00.815 } 00:07:00.815 }' 00:07:00.815 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:00.815 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:01.074 10:11:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:01.074 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:07:01.333 [2024-06-10 10:11:06.761982] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' bc270e70-2711-11ef-b084-113036b5c18d '!=' bc270e70-2711-11ef-b084-113036b5c18d ']' 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 49943 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 49943 ']' 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 49943 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps -c -o command 49943 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # tail -1 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:07:01.333 killing process with pid 49943 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 49943' 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 49943 00:07:01.333 [2024-06-10 10:11:06.797019] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.333 [2024-06-10 10:11:06.797051] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.333 [2024-06-10 10:11:06.797062] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.333 [2024-06-10 10:11:06.797066] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aa6b180 name raid_bdev1, state offline 00:07:01.333 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 49943 00:07:01.333 [2024-06-10 10:11:06.806751] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:01.592 10:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:07:01.592 00:07:01.592 real 0m10.060s 00:07:01.592 user 0m17.919s 00:07:01.592 sys 0m1.473s 00:07:01.592 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.592 10:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.592 ************************************ 00:07:01.592 END TEST raid_superblock_test 00:07:01.592 ************************************ 00:07:01.592 10:11:07 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:01.592 10:11:07 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 
-le 1 ']' 00:07:01.592 10:11:07 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:01.592 10:11:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:01.592 ************************************ 00:07:01.592 START TEST raid_read_error_test 00:07:01.592 ************************************ 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 2 read 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.JCxZb8jL 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50212 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50212 /var/tmp/spdk-raid.sock 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 50212 ']' 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:01.592 10:11:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:01.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:01.592 10:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.592 [2024-06-10 10:11:07.035980] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:01.592 [2024-06-10 10:11:07.036141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:02.159 EAL: TSC is not safe to use in SMP mode 00:07:02.159 EAL: TSC is not invariant 00:07:02.159 [2024-06-10 10:11:07.524020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.159 [2024-06-10 10:11:07.619936] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:02.159 [2024-06-10 10:11:07.622543] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.159 [2024-06-10 10:11:07.623439] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.159 [2024-06-10 10:11:07.623454] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.726 10:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:02.726 10:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:07:02.726 10:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:02.726 10:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:02.726 BaseBdev1_malloc 00:07:02.726 10:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:02.986 true 00:07:02.986 10:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:03.245 [2024-06-10 10:11:08.691904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:03.245 [2024-06-10 10:11:08.691975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.245 [2024-06-10 10:11:08.692009] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829fc4780 00:07:03.245 [2024-06-10 10:11:08.692025] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.245 [2024-06-10 10:11:08.692551] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.245 [2024-06-10 10:11:08.692591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:03.245 BaseBdev1 00:07:03.245 10:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:03.245 10:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:03.552 BaseBdev2_malloc 00:07:03.552 10:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:03.811 true 00:07:03.811 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:04.069 [2024-06-10 10:11:09.431946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:04.069 [2024-06-10 10:11:09.431997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.069 [2024-06-10 10:11:09.432020] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829fc4c80 00:07:04.069 [2024-06-10 10:11:09.432043] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.069 [2024-06-10 10:11:09.432582] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.069 [2024-06-10 10:11:09.432626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:04.069 BaseBdev2 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:04.069 [2024-06-10 10:11:09.627978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:04.069 [2024-06-10 10:11:09.628458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:04.069 [2024-06-10 10:11:09.628542] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x829fc4f00 00:07:04.069 [2024-06-10 10:11:09.628551] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:04.069 [2024-06-10 10:11:09.628588] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a030e20 00:07:04.069 [2024-06-10 10:11:09.628651] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829fc4f00 00:07:04.069 [2024-06-10 10:11:09.628659] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829fc4f00 00:07:04.069 [2024-06-10 10:11:09.628687] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:04.069 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:04.327 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:04.327 "name": "raid_bdev1", 00:07:04.327 "uuid": "c21b6e83-2711-11ef-b084-113036b5c18d", 00:07:04.327 "strip_size_kb": 64, 00:07:04.327 "state": "online", 00:07:04.327 "raid_level": "raid0", 00:07:04.327 "superblock": true, 00:07:04.327 "num_base_bdevs": 2, 00:07:04.327 "num_base_bdevs_discovered": 2, 00:07:04.327 "num_base_bdevs_operational": 2, 00:07:04.327 "base_bdevs_list": [ 00:07:04.327 { 00:07:04.327 "name": "BaseBdev1", 00:07:04.327 "uuid": "9fde45a8-3c63-4f54-b6d7-a9523142a7b7", 00:07:04.327 "is_configured": true, 00:07:04.327 "data_offset": 2048, 00:07:04.327 "data_size": 63488 00:07:04.327 }, 00:07:04.327 { 00:07:04.327 "name": "BaseBdev2", 00:07:04.327 "uuid": "1b1a33d6-da30-1d50-a124-4c497a70fca5", 00:07:04.327 "is_configured": true, 00:07:04.327 "data_offset": 2048, 00:07:04.327 "data_size": 63488 00:07:04.327 } 00:07:04.327 ] 00:07:04.327 }' 00:07:04.327 10:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:04.327 10:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.585 10:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:04.585 10:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:04.843 [2024-06-10 10:11:10.244056] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a030ec0 00:07:05.777 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:06.035 
10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:06.035 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.294 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:06.294 "name": "raid_bdev1", 00:07:06.294 "uuid": "c21b6e83-2711-11ef-b084-113036b5c18d", 00:07:06.294 "strip_size_kb": 64, 00:07:06.294 "state": "online", 00:07:06.294 "raid_level": "raid0", 00:07:06.294 "superblock": true, 00:07:06.294 "num_base_bdevs": 2, 00:07:06.294 "num_base_bdevs_discovered": 2, 00:07:06.294 "num_base_bdevs_operational": 2, 00:07:06.294 "base_bdevs_list": [ 00:07:06.294 { 00:07:06.294 "name": "BaseBdev1", 00:07:06.294 "uuid": "9fde45a8-3c63-4f54-b6d7-a9523142a7b7", 00:07:06.294 "is_configured": true, 00:07:06.294 "data_offset": 2048, 00:07:06.294 "data_size": 63488 00:07:06.294 }, 00:07:06.294 { 00:07:06.294 "name": "BaseBdev2", 00:07:06.294 "uuid": "1b1a33d6-da30-1d50-a124-4c497a70fca5", 00:07:06.294 "is_configured": true, 00:07:06.294 "data_offset": 2048, 00:07:06.294 "data_size": 63488 00:07:06.294 } 00:07:06.294 ] 00:07:06.294 }' 00:07:06.294 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:06.294 10:11:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.552 10:11:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:06.810 [2024-06-10 10:11:12.245292] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:06.810 [2024-06-10 10:11:12.245328] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:06.810 [2024-06-10 10:11:12.245820] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.810 [2024-06-10 10:11:12.245834] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.810 [2024-06-10 10:11:12.245846] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.810 [2024-06-10 10:11:12.245855] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829fc4f00 name raid_bdev1, state offline 00:07:06.810 0 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50212 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 50212 ']' 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 50212 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # tail -1 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 50212 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:07:06.810 killing process with pid 50212 
00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 50212' 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 50212 00:07:06.810 [2024-06-10 10:11:12.275719] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.810 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 50212 00:07:06.810 [2024-06-10 10:11:12.285534] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.069 10:11:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:07.069 10:11:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.JCxZb8jL 00:07:07.069 10:11:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:07.069 10:11:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:07:07.069 10:11:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:07.069 10:11:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:07.069 10:11:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:07.069 10:11:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:07:07.069 00:07:07.069 real 0m5.446s 00:07:07.069 user 0m8.260s 00:07:07.069 sys 0m0.944s 00:07:07.069 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:07.069 ************************************ 00:07:07.069 END TEST raid_read_error_test 00:07:07.069 ************************************ 00:07:07.069 10:11:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.069 10:11:12 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:07.069 10:11:12 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:07:07.069 10:11:12 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.069 10:11:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.069 ************************************ 00:07:07.069 START TEST raid_write_error_test 00:07:07.069 ************************************ 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 2 write 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( 
i <= num_base_bdevs )) 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ZiQscWEO 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50336 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50336 /var/tmp/spdk-raid.sock 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 50336 ']' 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:07.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:07.069 10:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.069 [2024-06-10 10:11:12.527665] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:07.069 [2024-06-10 10:11:12.527820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:07.636 EAL: TSC is not safe to use in SMP mode 00:07:07.636 EAL: TSC is not invariant 00:07:07.636 [2024-06-10 10:11:12.992910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.636 [2024-06-10 10:11:13.086436] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:07.636 [2024-06-10 10:11:13.089033] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.636 [2024-06-10 10:11:13.090024] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.636 [2024-06-10 10:11:13.090039] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.201 10:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:08.201 10:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:07:08.201 10:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:08.201 10:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:08.460 BaseBdev1_malloc 00:07:08.460 10:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:08.719 true 00:07:08.719 10:11:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:08.978 [2024-06-10 10:11:14.446243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:08.978 [2024-06-10 10:11:14.446300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.978 [2024-06-10 10:11:14.446327] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82e2ee780 00:07:08.978 [2024-06-10 10:11:14.446335] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.978 [2024-06-10 10:11:14.446799] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.978 [2024-06-10 10:11:14.446822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:08.978 BaseBdev1 00:07:08.978 10:11:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:08.978 10:11:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:09.236 BaseBdev2_malloc 00:07:09.236 10:11:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:09.494 true 00:07:09.494 10:11:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:09.753 [2024-06-10 10:11:15.118249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:09.753 [2024-06-10 10:11:15.118291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.753 [2024-06-10 10:11:15.118327] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82e2eec80 00:07:09.753 [2024-06-10 10:11:15.118334] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.753 [2024-06-10 10:11:15.118772] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.753 [2024-06-10 10:11:15.118794] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:09.753 BaseBdev2 00:07:09.753 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:10.010 [2024-06-10 10:11:15.374261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:10.010 [2024-06-10 10:11:15.374648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.010 [2024-06-10 10:11:15.374692] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e2eef00 00:07:10.010 [2024-06-10 10:11:15.374697] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:10.010 [2024-06-10 10:11:15.374721] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e35ae20 00:07:10.010 [2024-06-10 10:11:15.374769] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e2eef00 00:07:10.010 [2024-06-10 10:11:15.374773] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82e2eef00 00:07:10.010 [2024-06-10 10:11:15.374790] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:10.010 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.269 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:10.269 "name": "raid_bdev1", 00:07:10.269 "uuid": "c5883ee4-2711-11ef-b084-113036b5c18d", 00:07:10.269 "strip_size_kb": 64, 00:07:10.269 "state": "online", 00:07:10.269 "raid_level": "raid0", 00:07:10.269 "superblock": true, 00:07:10.269 "num_base_bdevs": 2, 00:07:10.269 "num_base_bdevs_discovered": 2, 00:07:10.269 "num_base_bdevs_operational": 2, 00:07:10.269 "base_bdevs_list": [ 00:07:10.269 { 00:07:10.269 "name": "BaseBdev1", 00:07:10.269 "uuid": "5ba61985-12a4-8252-8c75-13c87d958ba2", 00:07:10.269 "is_configured": true, 00:07:10.269 "data_offset": 2048, 00:07:10.269 "data_size": 63488 00:07:10.269 }, 00:07:10.269 { 00:07:10.269 "name": "BaseBdev2", 00:07:10.269 
"uuid": "41a4744d-c6ef-2f5b-ab08-012a4ae4eb83", 00:07:10.269 "is_configured": true, 00:07:10.269 "data_offset": 2048, 00:07:10.269 "data_size": 63488 00:07:10.269 } 00:07:10.269 ] 00:07:10.269 }' 00:07:10.269 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:10.269 10:11:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.528 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:10.528 10:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:10.528 [2024-06-10 10:11:16.038339] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e35aec0 00:07:11.465 10:11:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.033 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:12.293 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:12.293 "name": "raid_bdev1", 00:07:12.293 "uuid": "c5883ee4-2711-11ef-b084-113036b5c18d", 00:07:12.293 "strip_size_kb": 64, 00:07:12.293 "state": "online", 00:07:12.293 "raid_level": "raid0", 00:07:12.293 "superblock": true, 00:07:12.293 "num_base_bdevs": 2, 00:07:12.293 "num_base_bdevs_discovered": 2, 00:07:12.293 "num_base_bdevs_operational": 2, 00:07:12.293 "base_bdevs_list": [ 00:07:12.293 { 00:07:12.293 "name": "BaseBdev1", 00:07:12.293 "uuid": "5ba61985-12a4-8252-8c75-13c87d958ba2", 00:07:12.293 "is_configured": true, 00:07:12.293 "data_offset": 2048, 00:07:12.293 "data_size": 63488 00:07:12.293 }, 00:07:12.293 { 00:07:12.293 "name": "BaseBdev2", 
00:07:12.293 "uuid": "41a4744d-c6ef-2f5b-ab08-012a4ae4eb83", 00:07:12.293 "is_configured": true, 00:07:12.293 "data_offset": 2048, 00:07:12.293 "data_size": 63488 00:07:12.293 } 00:07:12.293 ] 00:07:12.293 }' 00:07:12.293 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:12.293 10:11:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.629 10:11:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:12.629 [2024-06-10 10:11:18.211232] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:12.629 [2024-06-10 10:11:18.211258] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.629 [2024-06-10 10:11:18.211562] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.629 [2024-06-10 10:11:18.211570] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.629 [2024-06-10 10:11:18.211576] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.629 [2024-06-10 10:11:18.211580] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e2eef00 name raid_bdev1, state offline 00:07:12.629 0 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50336 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 50336 ']' 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 50336 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 50336 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # tail -1 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 50336' 00:07:12.887 killing process with pid 50336 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 50336 00:07:12.887 [2024-06-10 10:11:18.241119] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 50336 00:07:12.887 [2024-06-10 10:11:18.250488] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ZiQscWEO 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.46 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.46 != \0\.\0\0 ]] 00:07:12.887 00:07:12.887 real 0m5.919s 00:07:12.887 user 0m9.164s 00:07:12.887 sys 0m0.999s 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:12.887 10:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.887 ************************************ 00:07:12.887 END TEST raid_write_error_test 00:07:12.887 ************************************ 00:07:12.887 10:11:18 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:12.887 10:11:18 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:12.887 10:11:18 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:07:12.887 10:11:18 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:12.887 10:11:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.887 ************************************ 00:07:12.887 START TEST raid_state_function_test 00:07:12.887 ************************************ 00:07:12.887 10:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 2 false 00:07:12.887 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:12.887 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:12.887 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:12.887 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:12.887 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:12.887 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:12.887 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:12.887 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:12.887 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:12.888 10:11:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=50462 00:07:12.888 Process raid pid: 50462 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50462' 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 50462 /var/tmp/spdk-raid.sock 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 50462 ']' 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:12.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:12.888 10:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.145 [2024-06-10 10:11:18.490665] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:13.145 [2024-06-10 10:11:18.490908] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:13.403 EAL: TSC is not safe to use in SMP mode 00:07:13.403 EAL: TSC is not invariant 00:07:13.403 [2024-06-10 10:11:18.996460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.662 [2024-06-10 10:11:19.075283] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:13.662 [2024-06-10 10:11:19.077358] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.662 [2024-06-10 10:11:19.078030] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.662 [2024-06-10 10:11:19.078041] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:14.229 [2024-06-10 10:11:19.752154] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.229 [2024-06-10 10:11:19.752204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.229 [2024-06-10 10:11:19.752209] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.229 [2024-06-10 10:11:19.752216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:14.229 10:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.794 10:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:14.794 "name": "Existed_Raid", 00:07:14.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.794 "strip_size_kb": 64, 00:07:14.794 "state": "configuring", 00:07:14.794 "raid_level": "concat", 00:07:14.794 "superblock": false, 00:07:14.794 "num_base_bdevs": 2, 00:07:14.794 "num_base_bdevs_discovered": 0, 00:07:14.794 "num_base_bdevs_operational": 2, 00:07:14.794 "base_bdevs_list": [ 00:07:14.794 { 00:07:14.794 "name": "BaseBdev1", 00:07:14.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.794 "is_configured": false, 00:07:14.794 "data_offset": 0, 00:07:14.794 "data_size": 0 00:07:14.794 }, 00:07:14.794 { 00:07:14.794 "name": 
"BaseBdev2", 00:07:14.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.794 "is_configured": false, 00:07:14.794 "data_offset": 0, 00:07:14.794 "data_size": 0 00:07:14.794 } 00:07:14.794 ] 00:07:14.794 }' 00:07:14.794 10:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:14.794 10:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.051 10:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:15.313 [2024-06-10 10:11:20.716181] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:15.313 [2024-06-10 10:11:20.716205] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c489500 name Existed_Raid, state configuring 00:07:15.313 10:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:15.572 [2024-06-10 10:11:20.968185] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:15.572 [2024-06-10 10:11:20.968226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:15.572 [2024-06-10 10:11:20.968230] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.572 [2024-06-10 10:11:20.968236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.572 10:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:15.830 [2024-06-10 10:11:21.181100] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.830 BaseBdev1 00:07:15.830 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:15.830 10:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:07:15.830 10:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:07:15.830 10:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:07:15.830 10:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:07:15.830 10:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:07:15.830 10:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:16.087 10:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:16.345 [ 00:07:16.345 { 00:07:16.345 "name": "BaseBdev1", 00:07:16.345 "aliases": [ 00:07:16.345 "c8fe29b8-2711-11ef-b084-113036b5c18d" 00:07:16.345 ], 00:07:16.345 "product_name": "Malloc disk", 00:07:16.345 "block_size": 512, 00:07:16.345 "num_blocks": 65536, 00:07:16.345 "uuid": "c8fe29b8-2711-11ef-b084-113036b5c18d", 00:07:16.345 "assigned_rate_limits": { 00:07:16.345 "rw_ios_per_sec": 0, 00:07:16.345 "rw_mbytes_per_sec": 0, 00:07:16.345 "r_mbytes_per_sec": 0, 00:07:16.345 
"w_mbytes_per_sec": 0 00:07:16.345 }, 00:07:16.345 "claimed": true, 00:07:16.345 "claim_type": "exclusive_write", 00:07:16.345 "zoned": false, 00:07:16.345 "supported_io_types": { 00:07:16.345 "read": true, 00:07:16.345 "write": true, 00:07:16.345 "unmap": true, 00:07:16.345 "write_zeroes": true, 00:07:16.345 "flush": true, 00:07:16.345 "reset": true, 00:07:16.345 "compare": false, 00:07:16.345 "compare_and_write": false, 00:07:16.345 "abort": true, 00:07:16.345 "nvme_admin": false, 00:07:16.345 "nvme_io": false 00:07:16.345 }, 00:07:16.345 "memory_domains": [ 00:07:16.345 { 00:07:16.345 "dma_device_id": "system", 00:07:16.345 "dma_device_type": 1 00:07:16.345 }, 00:07:16.345 { 00:07:16.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.345 "dma_device_type": 2 00:07:16.345 } 00:07:16.345 ], 00:07:16.345 "driver_specific": {} 00:07:16.345 } 00:07:16.345 ] 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:16.345 10:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.627 10:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:16.627 "name": "Existed_Raid", 00:07:16.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.627 "strip_size_kb": 64, 00:07:16.627 "state": "configuring", 00:07:16.627 "raid_level": "concat", 00:07:16.627 "superblock": false, 00:07:16.627 "num_base_bdevs": 2, 00:07:16.627 "num_base_bdevs_discovered": 1, 00:07:16.627 "num_base_bdevs_operational": 2, 00:07:16.627 "base_bdevs_list": [ 00:07:16.627 { 00:07:16.628 "name": "BaseBdev1", 00:07:16.628 "uuid": "c8fe29b8-2711-11ef-b084-113036b5c18d", 00:07:16.628 "is_configured": true, 00:07:16.628 "data_offset": 0, 00:07:16.628 "data_size": 65536 00:07:16.628 }, 00:07:16.628 { 00:07:16.628 "name": "BaseBdev2", 00:07:16.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.628 "is_configured": false, 00:07:16.628 "data_offset": 0, 00:07:16.628 "data_size": 0 00:07:16.628 } 00:07:16.628 ] 00:07:16.628 }' 00:07:16.628 10:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:16.628 
10:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.194 10:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:17.451 [2024-06-10 10:11:22.940243] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.452 [2024-06-10 10:11:22.940276] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c489500 name Existed_Raid, state configuring 00:07:17.452 10:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:17.709 [2024-06-10 10:11:23.300270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.709 [2024-06-10 10:11:23.301024] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.709 [2024-06-10 10:11:23.301069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.967 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.224 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:18.224 "name": "Existed_Raid", 00:07:18.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.224 "strip_size_kb": 64, 00:07:18.224 "state": "configuring", 00:07:18.224 "raid_level": "concat", 00:07:18.224 "superblock": false, 00:07:18.224 "num_base_bdevs": 2, 00:07:18.224 "num_base_bdevs_discovered": 1, 00:07:18.224 "num_base_bdevs_operational": 2, 00:07:18.224 "base_bdevs_list": [ 00:07:18.224 { 00:07:18.224 "name": "BaseBdev1", 00:07:18.224 "uuid": "c8fe29b8-2711-11ef-b084-113036b5c18d", 00:07:18.224 "is_configured": true, 00:07:18.224 "data_offset": 0, 
00:07:18.224 "data_size": 65536 00:07:18.224 }, 00:07:18.224 { 00:07:18.224 "name": "BaseBdev2", 00:07:18.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.225 "is_configured": false, 00:07:18.225 "data_offset": 0, 00:07:18.225 "data_size": 0 00:07:18.225 } 00:07:18.225 ] 00:07:18.225 }' 00:07:18.225 10:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:18.225 10:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.482 10:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:19.047 [2024-06-10 10:11:24.440464] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:19.047 [2024-06-10 10:11:24.440501] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c489a00 00:07:19.047 [2024-06-10 10:11:24.440513] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:19.047 [2024-06-10 10:11:24.440544] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c4ecec0 00:07:19.047 [2024-06-10 10:11:24.440631] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c489a00 00:07:19.047 [2024-06-10 10:11:24.440635] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c489a00 00:07:19.047 [2024-06-10 10:11:24.440666] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.047 BaseBdev2 00:07:19.047 10:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:19.047 10:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:07:19.047 10:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:07:19.047 10:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:07:19.047 10:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:07:19.047 10:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:07:19.047 10:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:19.304 10:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:19.560 [ 00:07:19.560 { 00:07:19.560 "name": "BaseBdev2", 00:07:19.560 "aliases": [ 00:07:19.560 "caef9e1d-2711-11ef-b084-113036b5c18d" 00:07:19.560 ], 00:07:19.560 "product_name": "Malloc disk", 00:07:19.560 "block_size": 512, 00:07:19.560 "num_blocks": 65536, 00:07:19.560 "uuid": "caef9e1d-2711-11ef-b084-113036b5c18d", 00:07:19.560 "assigned_rate_limits": { 00:07:19.560 "rw_ios_per_sec": 0, 00:07:19.560 "rw_mbytes_per_sec": 0, 00:07:19.560 "r_mbytes_per_sec": 0, 00:07:19.560 "w_mbytes_per_sec": 0 00:07:19.560 }, 00:07:19.560 "claimed": true, 00:07:19.560 "claim_type": "exclusive_write", 00:07:19.560 "zoned": false, 00:07:19.560 "supported_io_types": { 00:07:19.560 "read": true, 00:07:19.560 "write": true, 00:07:19.560 "unmap": true, 00:07:19.560 "write_zeroes": true, 00:07:19.560 "flush": true, 00:07:19.560 "reset": true, 00:07:19.560 "compare": 
false, 00:07:19.560 "compare_and_write": false, 00:07:19.560 "abort": true, 00:07:19.560 "nvme_admin": false, 00:07:19.560 "nvme_io": false 00:07:19.560 }, 00:07:19.560 "memory_domains": [ 00:07:19.560 { 00:07:19.560 "dma_device_id": "system", 00:07:19.560 "dma_device_type": 1 00:07:19.560 }, 00:07:19.560 { 00:07:19.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.560 "dma_device_type": 2 00:07:19.560 } 00:07:19.560 ], 00:07:19.560 "driver_specific": {} 00:07:19.560 } 00:07:19.560 ] 00:07:19.560 10:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:19.561 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:20.124 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:20.124 "name": "Existed_Raid", 00:07:20.124 "uuid": "caefa5fb-2711-11ef-b084-113036b5c18d", 00:07:20.124 "strip_size_kb": 64, 00:07:20.124 "state": "online", 00:07:20.124 "raid_level": "concat", 00:07:20.124 "superblock": false, 00:07:20.124 "num_base_bdevs": 2, 00:07:20.124 "num_base_bdevs_discovered": 2, 00:07:20.124 "num_base_bdevs_operational": 2, 00:07:20.124 "base_bdevs_list": [ 00:07:20.124 { 00:07:20.124 "name": "BaseBdev1", 00:07:20.124 "uuid": "c8fe29b8-2711-11ef-b084-113036b5c18d", 00:07:20.124 "is_configured": true, 00:07:20.124 "data_offset": 0, 00:07:20.124 "data_size": 65536 00:07:20.124 }, 00:07:20.124 { 00:07:20.124 "name": "BaseBdev2", 00:07:20.124 "uuid": "caef9e1d-2711-11ef-b084-113036b5c18d", 00:07:20.124 "is_configured": true, 00:07:20.124 "data_offset": 0, 00:07:20.124 "data_size": 65536 00:07:20.124 } 00:07:20.124 ] 00:07:20.124 }' 00:07:20.124 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:20.124 10:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.380 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
verify_raid_bdev_properties Existed_Raid 00:07:20.380 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:20.380 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:20.380 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:20.380 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:20.380 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:20.380 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:20.380 10:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:20.636 [2024-06-10 10:11:26.216383] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.893 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:20.893 "name": "Existed_Raid", 00:07:20.893 "aliases": [ 00:07:20.893 "caefa5fb-2711-11ef-b084-113036b5c18d" 00:07:20.893 ], 00:07:20.893 "product_name": "Raid Volume", 00:07:20.893 "block_size": 512, 00:07:20.893 "num_blocks": 131072, 00:07:20.893 "uuid": "caefa5fb-2711-11ef-b084-113036b5c18d", 00:07:20.893 "assigned_rate_limits": { 00:07:20.893 "rw_ios_per_sec": 0, 00:07:20.893 "rw_mbytes_per_sec": 0, 00:07:20.893 "r_mbytes_per_sec": 0, 00:07:20.893 "w_mbytes_per_sec": 0 00:07:20.893 }, 00:07:20.893 "claimed": false, 00:07:20.893 "zoned": false, 00:07:20.893 "supported_io_types": { 00:07:20.893 "read": true, 00:07:20.893 "write": true, 00:07:20.893 "unmap": true, 00:07:20.893 "write_zeroes": true, 00:07:20.893 "flush": true, 00:07:20.893 "reset": true, 00:07:20.893 "compare": false, 00:07:20.893 "compare_and_write": false, 00:07:20.893 "abort": false, 00:07:20.893 "nvme_admin": false, 00:07:20.893 "nvme_io": false 00:07:20.893 }, 00:07:20.893 "memory_domains": [ 00:07:20.893 { 00:07:20.893 "dma_device_id": "system", 00:07:20.893 "dma_device_type": 1 00:07:20.893 }, 00:07:20.893 { 00:07:20.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.893 "dma_device_type": 2 00:07:20.893 }, 00:07:20.893 { 00:07:20.893 "dma_device_id": "system", 00:07:20.893 "dma_device_type": 1 00:07:20.893 }, 00:07:20.893 { 00:07:20.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.893 "dma_device_type": 2 00:07:20.893 } 00:07:20.893 ], 00:07:20.893 "driver_specific": { 00:07:20.893 "raid": { 00:07:20.893 "uuid": "caefa5fb-2711-11ef-b084-113036b5c18d", 00:07:20.893 "strip_size_kb": 64, 00:07:20.893 "state": "online", 00:07:20.893 "raid_level": "concat", 00:07:20.893 "superblock": false, 00:07:20.893 "num_base_bdevs": 2, 00:07:20.893 "num_base_bdevs_discovered": 2, 00:07:20.893 "num_base_bdevs_operational": 2, 00:07:20.893 "base_bdevs_list": [ 00:07:20.893 { 00:07:20.893 "name": "BaseBdev1", 00:07:20.893 "uuid": "c8fe29b8-2711-11ef-b084-113036b5c18d", 00:07:20.893 "is_configured": true, 00:07:20.893 "data_offset": 0, 00:07:20.893 "data_size": 65536 00:07:20.893 }, 00:07:20.893 { 00:07:20.893 "name": "BaseBdev2", 00:07:20.893 "uuid": "caef9e1d-2711-11ef-b084-113036b5c18d", 00:07:20.893 "is_configured": true, 00:07:20.893 "data_offset": 0, 00:07:20.893 "data_size": 65536 00:07:20.893 } 00:07:20.893 ] 00:07:20.893 } 00:07:20.893 } 00:07:20.893 }' 00:07:20.893 10:11:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.893 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:20.893 BaseBdev2' 00:07:20.893 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:20.893 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:20.893 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:21.150 "name": "BaseBdev1", 00:07:21.150 "aliases": [ 00:07:21.150 "c8fe29b8-2711-11ef-b084-113036b5c18d" 00:07:21.150 ], 00:07:21.150 "product_name": "Malloc disk", 00:07:21.150 "block_size": 512, 00:07:21.150 "num_blocks": 65536, 00:07:21.150 "uuid": "c8fe29b8-2711-11ef-b084-113036b5c18d", 00:07:21.150 "assigned_rate_limits": { 00:07:21.150 "rw_ios_per_sec": 0, 00:07:21.150 "rw_mbytes_per_sec": 0, 00:07:21.150 "r_mbytes_per_sec": 0, 00:07:21.150 "w_mbytes_per_sec": 0 00:07:21.150 }, 00:07:21.150 "claimed": true, 00:07:21.150 "claim_type": "exclusive_write", 00:07:21.150 "zoned": false, 00:07:21.150 "supported_io_types": { 00:07:21.150 "read": true, 00:07:21.150 "write": true, 00:07:21.150 "unmap": true, 00:07:21.150 "write_zeroes": true, 00:07:21.150 "flush": true, 00:07:21.150 "reset": true, 00:07:21.150 "compare": false, 00:07:21.150 "compare_and_write": false, 00:07:21.150 "abort": true, 00:07:21.150 "nvme_admin": false, 00:07:21.150 "nvme_io": false 00:07:21.150 }, 00:07:21.150 "memory_domains": [ 00:07:21.150 { 00:07:21.150 "dma_device_id": "system", 00:07:21.150 "dma_device_type": 1 00:07:21.150 }, 00:07:21.150 { 00:07:21.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.150 "dma_device_type": 2 00:07:21.150 } 00:07:21.150 ], 00:07:21.150 "driver_specific": {} 00:07:21.150 }' 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:21.150 10:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:21.716 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:21.716 "name": "BaseBdev2", 00:07:21.716 "aliases": [ 00:07:21.716 "caef9e1d-2711-11ef-b084-113036b5c18d" 00:07:21.716 ], 00:07:21.716 "product_name": "Malloc disk", 00:07:21.716 "block_size": 512, 00:07:21.716 "num_blocks": 65536, 00:07:21.716 "uuid": "caef9e1d-2711-11ef-b084-113036b5c18d", 00:07:21.716 "assigned_rate_limits": { 00:07:21.716 "rw_ios_per_sec": 0, 00:07:21.716 "rw_mbytes_per_sec": 0, 00:07:21.716 "r_mbytes_per_sec": 0, 00:07:21.716 "w_mbytes_per_sec": 0 00:07:21.716 }, 00:07:21.716 "claimed": true, 00:07:21.716 "claim_type": "exclusive_write", 00:07:21.716 "zoned": false, 00:07:21.716 "supported_io_types": { 00:07:21.716 "read": true, 00:07:21.716 "write": true, 00:07:21.716 "unmap": true, 00:07:21.716 "write_zeroes": true, 00:07:21.716 "flush": true, 00:07:21.716 "reset": true, 00:07:21.717 "compare": false, 00:07:21.717 "compare_and_write": false, 00:07:21.717 "abort": true, 00:07:21.717 "nvme_admin": false, 00:07:21.717 "nvme_io": false 00:07:21.717 }, 00:07:21.717 "memory_domains": [ 00:07:21.717 { 00:07:21.717 "dma_device_id": "system", 00:07:21.717 "dma_device_type": 1 00:07:21.717 }, 00:07:21.717 { 00:07:21.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.717 "dma_device_type": 2 00:07:21.717 } 00:07:21.717 ], 00:07:21.717 "driver_specific": {} 00:07:21.717 }' 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:21.717 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:21.975 [2024-06-10 10:11:27.412396] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:21.975 [2024-06-10 10:11:27.412423] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.975 [2024-06-10 10:11:27.412439] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:21.975 10:11:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:21.975 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.233 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:22.233 "name": "Existed_Raid", 00:07:22.233 "uuid": "caefa5fb-2711-11ef-b084-113036b5c18d", 00:07:22.233 "strip_size_kb": 64, 00:07:22.233 "state": "offline", 00:07:22.233 "raid_level": "concat", 00:07:22.233 "superblock": false, 00:07:22.233 "num_base_bdevs": 2, 00:07:22.233 "num_base_bdevs_discovered": 1, 00:07:22.233 "num_base_bdevs_operational": 1, 00:07:22.233 "base_bdevs_list": [ 00:07:22.233 { 00:07:22.233 "name": null, 00:07:22.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.233 "is_configured": false, 00:07:22.233 "data_offset": 0, 00:07:22.233 "data_size": 65536 00:07:22.233 }, 00:07:22.233 { 00:07:22.233 "name": "BaseBdev2", 00:07:22.233 "uuid": "caef9e1d-2711-11ef-b084-113036b5c18d", 00:07:22.233 "is_configured": true, 00:07:22.233 "data_offset": 0, 00:07:22.233 "data_size": 65536 00:07:22.233 } 00:07:22.233 ] 00:07:22.233 }' 00:07:22.233 10:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:22.233 10:11:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.491 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:22.491 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:22.491 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.491 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 
00:07:22.749 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:22.749 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:22.749 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:23.314 [2024-06-10 10:11:28.621344] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:23.314 [2024-06-10 10:11:28.621381] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c489a00 name Existed_Raid, state offline 00:07:23.314 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:23.314 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:23.314 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:23.314 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:23.573 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:23.573 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:23.573 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:23.573 10:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 50462 00:07:23.573 10:11:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 50462 ']' 00:07:23.573 10:11:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 50462 00:07:23.573 10:11:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:07:23.573 10:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:07:23.573 10:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps -c -o command 50462 00:07:23.573 10:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # tail -1 00:07:23.573 10:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:07:23.573 10:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:07:23.573 10:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 50462' 00:07:23.573 killing process with pid 50462 00:07:23.573 10:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 50462 00:07:23.573 [2024-06-10 10:11:29.009741] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.573 [2024-06-10 10:11:29.009788] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.573 10:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 50462 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:23.832 00:07:23.832 real 0m10.706s 00:07:23.832 user 0m19.119s 00:07:23.832 sys 0m1.560s 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.832 
************************************ 00:07:23.832 END TEST raid_state_function_test 00:07:23.832 ************************************ 00:07:23.832 10:11:29 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:23.832 10:11:29 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:07:23.832 10:11:29 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:23.832 10:11:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.832 ************************************ 00:07:23.832 START TEST raid_state_function_test_sb 00:07:23.832 ************************************ 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 2 true 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=50741 00:07:23.832 Process raid pid: 50741 00:07:23.832 10:11:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50741' 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 50741 /var/tmp/spdk-raid.sock 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 50741 ']' 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:23.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:23.832 10:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.832 [2024-06-10 10:11:29.242027] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:23.832 [2024-06-10 10:11:29.242220] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:24.400 EAL: TSC is not safe to use in SMP mode 00:07:24.400 EAL: TSC is not invariant 00:07:24.400 [2024-06-10 10:11:29.721244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.400 [2024-06-10 10:11:29.799865] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:24.400 [2024-06-10 10:11:29.802368] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.400 [2024-06-10 10:11:29.803350] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.400 [2024-06-10 10:11:29.803368] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.966 10:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:24.966 10:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:07:24.966 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:25.224 [2024-06-10 10:11:30.693703] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.224 [2024-06-10 10:11:30.693765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.224 [2024-06-10 10:11:30.693770] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.224 [2024-06-10 10:11:30.693778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.224 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.483 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:25.483 "name": "Existed_Raid", 00:07:25.483 "uuid": "cea9ce84-2711-11ef-b084-113036b5c18d", 00:07:25.483 "strip_size_kb": 64, 00:07:25.483 "state": "configuring", 00:07:25.483 "raid_level": "concat", 00:07:25.483 "superblock": true, 00:07:25.483 "num_base_bdevs": 2, 00:07:25.483 "num_base_bdevs_discovered": 0, 00:07:25.483 "num_base_bdevs_operational": 2, 00:07:25.483 "base_bdevs_list": [ 00:07:25.483 { 00:07:25.483 "name": "BaseBdev1", 00:07:25.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.483 "is_configured": false, 00:07:25.483 "data_offset": 0, 00:07:25.483 "data_size": 0 
00:07:25.483 }, 00:07:25.483 { 00:07:25.483 "name": "BaseBdev2", 00:07:25.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.483 "is_configured": false, 00:07:25.483 "data_offset": 0, 00:07:25.483 "data_size": 0 00:07:25.483 } 00:07:25.483 ] 00:07:25.483 }' 00:07:25.483 10:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:25.483 10:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.804 10:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:26.062 [2024-06-10 10:11:31.513678] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.062 [2024-06-10 10:11:31.513702] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b39d500 name Existed_Raid, state configuring 00:07:26.062 10:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:26.321 [2024-06-10 10:11:31.801715] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.321 [2024-06-10 10:11:31.801771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.321 [2024-06-10 10:11:31.801776] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.321 [2024-06-10 10:11:31.801784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.321 10:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:26.580 [2024-06-10 10:11:32.014587] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.580 BaseBdev1 00:07:26.580 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:26.580 10:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:07:26.580 10:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:07:26.580 10:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:07:26.580 10:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:07:26.580 10:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:07:26.580 10:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:26.837 10:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:27.095 [ 00:07:27.095 { 00:07:27.095 "name": "BaseBdev1", 00:07:27.095 "aliases": [ 00:07:27.095 "cf733947-2711-11ef-b084-113036b5c18d" 00:07:27.095 ], 00:07:27.095 "product_name": "Malloc disk", 00:07:27.095 "block_size": 512, 00:07:27.095 "num_blocks": 65536, 00:07:27.095 "uuid": "cf733947-2711-11ef-b084-113036b5c18d", 00:07:27.095 "assigned_rate_limits": { 00:07:27.095 "rw_ios_per_sec": 0, 
00:07:27.095 "rw_mbytes_per_sec": 0, 00:07:27.095 "r_mbytes_per_sec": 0, 00:07:27.095 "w_mbytes_per_sec": 0 00:07:27.095 }, 00:07:27.095 "claimed": true, 00:07:27.095 "claim_type": "exclusive_write", 00:07:27.095 "zoned": false, 00:07:27.095 "supported_io_types": { 00:07:27.095 "read": true, 00:07:27.095 "write": true, 00:07:27.095 "unmap": true, 00:07:27.095 "write_zeroes": true, 00:07:27.095 "flush": true, 00:07:27.095 "reset": true, 00:07:27.095 "compare": false, 00:07:27.095 "compare_and_write": false, 00:07:27.095 "abort": true, 00:07:27.095 "nvme_admin": false, 00:07:27.095 "nvme_io": false 00:07:27.095 }, 00:07:27.095 "memory_domains": [ 00:07:27.095 { 00:07:27.095 "dma_device_id": "system", 00:07:27.095 "dma_device_type": 1 00:07:27.095 }, 00:07:27.095 { 00:07:27.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.095 "dma_device_type": 2 00:07:27.095 } 00:07:27.095 ], 00:07:27.095 "driver_specific": {} 00:07:27.095 } 00:07:27.095 ] 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:27.095 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.353 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:27.353 "name": "Existed_Raid", 00:07:27.353 "uuid": "cf52e043-2711-11ef-b084-113036b5c18d", 00:07:27.353 "strip_size_kb": 64, 00:07:27.353 "state": "configuring", 00:07:27.353 "raid_level": "concat", 00:07:27.353 "superblock": true, 00:07:27.353 "num_base_bdevs": 2, 00:07:27.353 "num_base_bdevs_discovered": 1, 00:07:27.353 "num_base_bdevs_operational": 2, 00:07:27.353 "base_bdevs_list": [ 00:07:27.353 { 00:07:27.353 "name": "BaseBdev1", 00:07:27.353 "uuid": "cf733947-2711-11ef-b084-113036b5c18d", 00:07:27.353 "is_configured": true, 00:07:27.353 "data_offset": 2048, 00:07:27.353 "data_size": 63488 00:07:27.353 }, 00:07:27.353 { 00:07:27.353 "name": "BaseBdev2", 00:07:27.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.353 "is_configured": false, 00:07:27.353 "data_offset": 0, 00:07:27.353 "data_size": 0 00:07:27.353 } 00:07:27.353 ] 
00:07:27.353 }' 00:07:27.353 10:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:27.353 10:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.611 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:27.870 [2024-06-10 10:11:33.301733] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.870 [2024-06-10 10:11:33.301761] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b39d500 name Existed_Raid, state configuring 00:07:27.870 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:28.129 [2024-06-10 10:11:33.565751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.129 [2024-06-10 10:11:33.566412] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.129 [2024-06-10 10:11:33.566460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:28.129 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.388 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:28.388 "name": "Existed_Raid", 00:07:28.388 "uuid": "d0600bce-2711-11ef-b084-113036b5c18d", 00:07:28.388 "strip_size_kb": 64, 00:07:28.388 "state": "configuring", 00:07:28.388 "raid_level": "concat", 00:07:28.388 "superblock": true, 00:07:28.388 "num_base_bdevs": 2, 00:07:28.388 "num_base_bdevs_discovered": 1, 00:07:28.388 "num_base_bdevs_operational": 2, 00:07:28.388 
"base_bdevs_list": [ 00:07:28.388 { 00:07:28.388 "name": "BaseBdev1", 00:07:28.388 "uuid": "cf733947-2711-11ef-b084-113036b5c18d", 00:07:28.388 "is_configured": true, 00:07:28.388 "data_offset": 2048, 00:07:28.388 "data_size": 63488 00:07:28.388 }, 00:07:28.388 { 00:07:28.388 "name": "BaseBdev2", 00:07:28.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.388 "is_configured": false, 00:07:28.388 "data_offset": 0, 00:07:28.388 "data_size": 0 00:07:28.388 } 00:07:28.388 ] 00:07:28.388 }' 00:07:28.388 10:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:28.388 10:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.648 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:28.907 [2024-06-10 10:11:34.361915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.907 [2024-06-10 10:11:34.361972] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b39da00 00:07:28.907 [2024-06-10 10:11:34.361977] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.907 [2024-06-10 10:11:34.361998] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b400ec0 00:07:28.907 [2024-06-10 10:11:34.362028] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b39da00 00:07:28.907 [2024-06-10 10:11:34.362052] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b39da00 00:07:28.907 [2024-06-10 10:11:34.362068] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.907 BaseBdev2 00:07:28.907 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:28.907 10:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:07:28.907 10:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:07:28.907 10:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:07:28.907 10:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:07:28.907 10:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:07:28.907 10:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:29.166 10:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.426 [ 00:07:29.426 { 00:07:29.426 "name": "BaseBdev2", 00:07:29.426 "aliases": [ 00:07:29.426 "d0d983e2-2711-11ef-b084-113036b5c18d" 00:07:29.426 ], 00:07:29.426 "product_name": "Malloc disk", 00:07:29.426 "block_size": 512, 00:07:29.426 "num_blocks": 65536, 00:07:29.426 "uuid": "d0d983e2-2711-11ef-b084-113036b5c18d", 00:07:29.426 "assigned_rate_limits": { 00:07:29.426 "rw_ios_per_sec": 0, 00:07:29.426 "rw_mbytes_per_sec": 0, 00:07:29.426 "r_mbytes_per_sec": 0, 00:07:29.426 "w_mbytes_per_sec": 0 00:07:29.426 }, 00:07:29.426 "claimed": true, 00:07:29.426 "claim_type": "exclusive_write", 00:07:29.426 "zoned": false, 
00:07:29.426 "supported_io_types": { 00:07:29.426 "read": true, 00:07:29.426 "write": true, 00:07:29.426 "unmap": true, 00:07:29.426 "write_zeroes": true, 00:07:29.426 "flush": true, 00:07:29.426 "reset": true, 00:07:29.426 "compare": false, 00:07:29.426 "compare_and_write": false, 00:07:29.426 "abort": true, 00:07:29.426 "nvme_admin": false, 00:07:29.426 "nvme_io": false 00:07:29.426 }, 00:07:29.426 "memory_domains": [ 00:07:29.426 { 00:07:29.426 "dma_device_id": "system", 00:07:29.426 "dma_device_type": 1 00:07:29.426 }, 00:07:29.426 { 00:07:29.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.426 "dma_device_type": 2 00:07:29.426 } 00:07:29.426 ], 00:07:29.426 "driver_specific": {} 00:07:29.426 } 00:07:29.426 ] 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.426 10:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.685 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:29.685 "name": "Existed_Raid", 00:07:29.685 "uuid": "d0600bce-2711-11ef-b084-113036b5c18d", 00:07:29.685 "strip_size_kb": 64, 00:07:29.685 "state": "online", 00:07:29.685 "raid_level": "concat", 00:07:29.685 "superblock": true, 00:07:29.685 "num_base_bdevs": 2, 00:07:29.685 "num_base_bdevs_discovered": 2, 00:07:29.685 "num_base_bdevs_operational": 2, 00:07:29.685 "base_bdevs_list": [ 00:07:29.685 { 00:07:29.685 "name": "BaseBdev1", 00:07:29.685 "uuid": "cf733947-2711-11ef-b084-113036b5c18d", 00:07:29.685 "is_configured": true, 00:07:29.685 "data_offset": 2048, 00:07:29.685 "data_size": 63488 00:07:29.685 }, 00:07:29.685 { 00:07:29.685 "name": "BaseBdev2", 00:07:29.685 "uuid": "d0d983e2-2711-11ef-b084-113036b5c18d", 00:07:29.685 "is_configured": true, 00:07:29.685 "data_offset": 2048, 00:07:29.685 "data_size": 63488 00:07:29.685 } 00:07:29.685 ] 00:07:29.685 }' 
00:07:29.685 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:29.685 10:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.064 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:30.064 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:30.064 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:30.064 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:30.064 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:30.064 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:30.064 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:30.064 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:30.064 [2024-06-10 10:11:35.569869] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.064 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:30.064 "name": "Existed_Raid", 00:07:30.064 "aliases": [ 00:07:30.064 "d0600bce-2711-11ef-b084-113036b5c18d" 00:07:30.064 ], 00:07:30.064 "product_name": "Raid Volume", 00:07:30.064 "block_size": 512, 00:07:30.064 "num_blocks": 126976, 00:07:30.064 "uuid": "d0600bce-2711-11ef-b084-113036b5c18d", 00:07:30.064 "assigned_rate_limits": { 00:07:30.064 "rw_ios_per_sec": 0, 00:07:30.064 "rw_mbytes_per_sec": 0, 00:07:30.064 "r_mbytes_per_sec": 0, 00:07:30.064 "w_mbytes_per_sec": 0 00:07:30.064 }, 00:07:30.065 "claimed": false, 00:07:30.065 "zoned": false, 00:07:30.065 "supported_io_types": { 00:07:30.065 "read": true, 00:07:30.065 "write": true, 00:07:30.065 "unmap": true, 00:07:30.065 "write_zeroes": true, 00:07:30.065 "flush": true, 00:07:30.065 "reset": true, 00:07:30.065 "compare": false, 00:07:30.065 "compare_and_write": false, 00:07:30.065 "abort": false, 00:07:30.065 "nvme_admin": false, 00:07:30.065 "nvme_io": false 00:07:30.065 }, 00:07:30.065 "memory_domains": [ 00:07:30.065 { 00:07:30.065 "dma_device_id": "system", 00:07:30.065 "dma_device_type": 1 00:07:30.065 }, 00:07:30.065 { 00:07:30.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.065 "dma_device_type": 2 00:07:30.065 }, 00:07:30.065 { 00:07:30.065 "dma_device_id": "system", 00:07:30.065 "dma_device_type": 1 00:07:30.065 }, 00:07:30.065 { 00:07:30.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.065 "dma_device_type": 2 00:07:30.065 } 00:07:30.065 ], 00:07:30.065 "driver_specific": { 00:07:30.065 "raid": { 00:07:30.065 "uuid": "d0600bce-2711-11ef-b084-113036b5c18d", 00:07:30.065 "strip_size_kb": 64, 00:07:30.065 "state": "online", 00:07:30.065 "raid_level": "concat", 00:07:30.065 "superblock": true, 00:07:30.065 "num_base_bdevs": 2, 00:07:30.065 "num_base_bdevs_discovered": 2, 00:07:30.065 "num_base_bdevs_operational": 2, 00:07:30.065 "base_bdevs_list": [ 00:07:30.065 { 00:07:30.065 "name": "BaseBdev1", 00:07:30.065 "uuid": "cf733947-2711-11ef-b084-113036b5c18d", 00:07:30.065 "is_configured": true, 00:07:30.065 "data_offset": 2048, 00:07:30.065 "data_size": 63488 00:07:30.065 }, 00:07:30.065 { 00:07:30.065 "name": 
"BaseBdev2", 00:07:30.065 "uuid": "d0d983e2-2711-11ef-b084-113036b5c18d", 00:07:30.065 "is_configured": true, 00:07:30.065 "data_offset": 2048, 00:07:30.065 "data_size": 63488 00:07:30.065 } 00:07:30.065 ] 00:07:30.065 } 00:07:30.065 } 00:07:30.065 }' 00:07:30.065 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:30.065 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:30.065 BaseBdev2' 00:07:30.065 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:30.065 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:30.065 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:30.338 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:30.338 "name": "BaseBdev1", 00:07:30.338 "aliases": [ 00:07:30.338 "cf733947-2711-11ef-b084-113036b5c18d" 00:07:30.338 ], 00:07:30.338 "product_name": "Malloc disk", 00:07:30.338 "block_size": 512, 00:07:30.338 "num_blocks": 65536, 00:07:30.338 "uuid": "cf733947-2711-11ef-b084-113036b5c18d", 00:07:30.338 "assigned_rate_limits": { 00:07:30.338 "rw_ios_per_sec": 0, 00:07:30.338 "rw_mbytes_per_sec": 0, 00:07:30.338 "r_mbytes_per_sec": 0, 00:07:30.338 "w_mbytes_per_sec": 0 00:07:30.338 }, 00:07:30.338 "claimed": true, 00:07:30.338 "claim_type": "exclusive_write", 00:07:30.338 "zoned": false, 00:07:30.338 "supported_io_types": { 00:07:30.338 "read": true, 00:07:30.338 "write": true, 00:07:30.338 "unmap": true, 00:07:30.338 "write_zeroes": true, 00:07:30.338 "flush": true, 00:07:30.338 "reset": true, 00:07:30.338 "compare": false, 00:07:30.338 "compare_and_write": false, 00:07:30.338 "abort": true, 00:07:30.338 "nvme_admin": false, 00:07:30.338 "nvme_io": false 00:07:30.338 }, 00:07:30.338 "memory_domains": [ 00:07:30.338 { 00:07:30.338 "dma_device_id": "system", 00:07:30.338 "dma_device_type": 1 00:07:30.338 }, 00:07:30.338 { 00:07:30.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.338 "dma_device_type": 2 00:07:30.338 } 00:07:30.338 ], 00:07:30.339 "driver_specific": {} 00:07:30.339 }' 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:30.339 10:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:30.598 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:30.598 "name": "BaseBdev2", 00:07:30.598 "aliases": [ 00:07:30.598 "d0d983e2-2711-11ef-b084-113036b5c18d" 00:07:30.598 ], 00:07:30.598 "product_name": "Malloc disk", 00:07:30.598 "block_size": 512, 00:07:30.598 "num_blocks": 65536, 00:07:30.598 "uuid": "d0d983e2-2711-11ef-b084-113036b5c18d", 00:07:30.598 "assigned_rate_limits": { 00:07:30.598 "rw_ios_per_sec": 0, 00:07:30.598 "rw_mbytes_per_sec": 0, 00:07:30.598 "r_mbytes_per_sec": 0, 00:07:30.598 "w_mbytes_per_sec": 0 00:07:30.598 }, 00:07:30.598 "claimed": true, 00:07:30.598 "claim_type": "exclusive_write", 00:07:30.598 "zoned": false, 00:07:30.598 "supported_io_types": { 00:07:30.598 "read": true, 00:07:30.598 "write": true, 00:07:30.598 "unmap": true, 00:07:30.598 "write_zeroes": true, 00:07:30.598 "flush": true, 00:07:30.598 "reset": true, 00:07:30.598 "compare": false, 00:07:30.598 "compare_and_write": false, 00:07:30.598 "abort": true, 00:07:30.598 "nvme_admin": false, 00:07:30.598 "nvme_io": false 00:07:30.598 }, 00:07:30.598 "memory_domains": [ 00:07:30.598 { 00:07:30.598 "dma_device_id": "system", 00:07:30.599 "dma_device_type": 1 00:07:30.599 }, 00:07:30.599 { 00:07:30.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.599 "dma_device_type": 2 00:07:30.599 } 00:07:30.599 ], 00:07:30.599 "driver_specific": {} 00:07:30.599 }' 00:07:30.599 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:30.858 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:31.116 [2024-06-10 10:11:36.553903] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:31.116 [2024-06-10 
10:11:36.553927] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.116 [2024-06-10 10:11:36.553945] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.116 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:31.375 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:31.375 "name": "Existed_Raid", 00:07:31.375 "uuid": "d0600bce-2711-11ef-b084-113036b5c18d", 00:07:31.375 "strip_size_kb": 64, 00:07:31.375 "state": "offline", 00:07:31.375 "raid_level": "concat", 00:07:31.375 "superblock": true, 00:07:31.375 "num_base_bdevs": 2, 00:07:31.375 "num_base_bdevs_discovered": 1, 00:07:31.375 "num_base_bdevs_operational": 1, 00:07:31.375 "base_bdevs_list": [ 00:07:31.375 { 00:07:31.375 "name": null, 00:07:31.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.375 "is_configured": false, 00:07:31.375 "data_offset": 2048, 00:07:31.375 "data_size": 63488 00:07:31.375 }, 00:07:31.375 { 00:07:31.375 "name": "BaseBdev2", 00:07:31.375 "uuid": "d0d983e2-2711-11ef-b084-113036b5c18d", 00:07:31.375 "is_configured": true, 00:07:31.375 "data_offset": 2048, 00:07:31.375 "data_size": 63488 00:07:31.375 } 00:07:31.375 ] 00:07:31.375 }' 00:07:31.375 10:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:31.375 10:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.943 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 
)) 00:07:31.943 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:31.943 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:31.943 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:32.202 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:32.202 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:32.202 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:32.460 [2024-06-10 10:11:37.910768] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:32.460 [2024-06-10 10:11:37.910799] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b39da00 name Existed_Raid, state offline 00:07:32.460 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:32.460 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:32.460 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:32.460 10:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 50741 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 50741 ']' 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 50741 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps -c -o command 50741 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # tail -1 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:07:32.720 killing process with pid 50741 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 50741' 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 50741 00:07:32.720 [2024-06-10 10:11:38.152557] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.720 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 50741 00:07:32.720 [2024-06-10 10:11:38.152597] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.979 10:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:32.979 00:07:32.979 real 0m9.107s 00:07:32.979 user 0m15.944s 00:07:32.979 sys 0m1.555s 00:07:32.979 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:32.979 ************************************ 00:07:32.979 END TEST raid_state_function_test_sb 00:07:32.979 ************************************ 00:07:32.979 10:11:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.979 10:11:38 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:32.979 10:11:38 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:32.979 10:11:38 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:32.979 10:11:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.979 ************************************ 00:07:32.979 START TEST raid_superblock_test 00:07:32.979 ************************************ 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test concat 2 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=51015 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 51015 /var/tmp/spdk-raid.sock 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 51015 ']' 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # 
local rpc_addr=/var/tmp/spdk-raid.sock 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:32.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:32.979 10:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.979 [2024-06-10 10:11:38.394394] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:32.979 [2024-06-10 10:11:38.394629] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:33.547 EAL: TSC is not safe to use in SMP mode 00:07:33.547 EAL: TSC is not invariant 00:07:33.547 [2024-06-10 10:11:38.892011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.547 [2024-06-10 10:11:38.987938] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:33.547 [2024-06-10 10:11:38.990612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.547 [2024-06-10 10:11:38.991490] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.547 [2024-06-10 10:11:38.991505] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:34.115 10:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:34.374 malloc1 00:07:34.374 10:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:34.633 [2024-06-10 10:11:40.147618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:34.633 [2024-06-10 10:11:40.147687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.633 [2024-06-10 10:11:40.147706] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82afb2780 00:07:34.633 
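For context, the bdev stack that raid_superblock_test assembles in this trace can be reproduced with the same RPCs that appear in the xtrace: two 32 MB malloc bdevs with 512-byte blocks (65536 blocks, matching the dumps above), each wrapped in a passthru bdev with an explicit UUID, then combined into a concat array created with a superblock. A condensed sketch, assuming a bdev_svc target is already listening on /var/tmp/spdk-raid.sock and using the same names, sizes, and flags as the trace:

  RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Base devices: 32 MB malloc disks with 512-byte blocks
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_malloc_create 32 512 -b malloc2

  # Passthru wrappers on top of the malloc disks, with the explicit UUIDs used in the trace
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

  # concat array over both passthru bdevs, 64 KB strip size, superblock written (-s)
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s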
[2024-06-10 10:11:40.147713] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.633 [2024-06-10 10:11:40.148494] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.633 [2024-06-10 10:11:40.148534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:34.633 pt1 00:07:34.633 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:34.633 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:34.633 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:07:34.633 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:07:34.633 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:34.633 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:34.633 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:34.633 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:34.633 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:34.893 malloc2 00:07:34.893 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:35.151 [2024-06-10 10:11:40.711621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:35.151 [2024-06-10 10:11:40.711675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.151 [2024-06-10 10:11:40.711685] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82afb2c80 00:07:35.151 [2024-06-10 10:11:40.711692] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.151 [2024-06-10 10:11:40.712237] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.151 [2024-06-10 10:11:40.712267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:35.151 pt2 00:07:35.151 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:35.151 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:35.151 10:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:07:35.719 [2024-06-10 10:11:41.007682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:35.719 [2024-06-10 10:11:41.008200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:35.719 [2024-06-10 10:11:41.008258] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82afb2f00 00:07:35.719 [2024-06-10 10:11:41.008263] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.719 [2024-06-10 10:11:41.008294] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b015e20 00:07:35.719 [2024-06-10 10:11:41.008355] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82afb2f00 00:07:35.719 [2024-06-10 10:11:41.008359] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82afb2f00 00:07:35.719 [2024-06-10 10:11:41.008381] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.719 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:35.719 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:35.719 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:35.719 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:35.719 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:35.719 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:35.719 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:35.719 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:35.720 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:35.720 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:35.720 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:35.720 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.720 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:35.720 "name": "raid_bdev1", 00:07:35.720 "uuid": "d4cf9836-2711-11ef-b084-113036b5c18d", 00:07:35.720 "strip_size_kb": 64, 00:07:35.720 "state": "online", 00:07:35.720 "raid_level": "concat", 00:07:35.720 "superblock": true, 00:07:35.720 "num_base_bdevs": 2, 00:07:35.720 "num_base_bdevs_discovered": 2, 00:07:35.720 "num_base_bdevs_operational": 2, 00:07:35.720 "base_bdevs_list": [ 00:07:35.720 { 00:07:35.720 "name": "pt1", 00:07:35.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:35.720 "is_configured": true, 00:07:35.720 "data_offset": 2048, 00:07:35.720 "data_size": 63488 00:07:35.720 }, 00:07:35.720 { 00:07:35.720 "name": "pt2", 00:07:35.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:35.720 "is_configured": true, 00:07:35.720 "data_offset": 2048, 00:07:35.720 "data_size": 63488 00:07:35.720 } 00:07:35.720 ] 00:07:35.720 }' 00:07:35.720 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:35.720 10:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:36.288 [2024-06-10 10:11:41.851728] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:36.288 "name": "raid_bdev1", 00:07:36.288 "aliases": [ 00:07:36.288 "d4cf9836-2711-11ef-b084-113036b5c18d" 00:07:36.288 ], 00:07:36.288 "product_name": "Raid Volume", 00:07:36.288 "block_size": 512, 00:07:36.288 "num_blocks": 126976, 00:07:36.288 "uuid": "d4cf9836-2711-11ef-b084-113036b5c18d", 00:07:36.288 "assigned_rate_limits": { 00:07:36.288 "rw_ios_per_sec": 0, 00:07:36.288 "rw_mbytes_per_sec": 0, 00:07:36.288 "r_mbytes_per_sec": 0, 00:07:36.288 "w_mbytes_per_sec": 0 00:07:36.288 }, 00:07:36.288 "claimed": false, 00:07:36.288 "zoned": false, 00:07:36.288 "supported_io_types": { 00:07:36.288 "read": true, 00:07:36.288 "write": true, 00:07:36.288 "unmap": true, 00:07:36.288 "write_zeroes": true, 00:07:36.288 "flush": true, 00:07:36.288 "reset": true, 00:07:36.288 "compare": false, 00:07:36.288 "compare_and_write": false, 00:07:36.288 "abort": false, 00:07:36.288 "nvme_admin": false, 00:07:36.288 "nvme_io": false 00:07:36.288 }, 00:07:36.288 "memory_domains": [ 00:07:36.288 { 00:07:36.288 "dma_device_id": "system", 00:07:36.288 "dma_device_type": 1 00:07:36.288 }, 00:07:36.288 { 00:07:36.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.288 "dma_device_type": 2 00:07:36.288 }, 00:07:36.288 { 00:07:36.288 "dma_device_id": "system", 00:07:36.288 "dma_device_type": 1 00:07:36.288 }, 00:07:36.288 { 00:07:36.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.288 "dma_device_type": 2 00:07:36.288 } 00:07:36.288 ], 00:07:36.288 "driver_specific": { 00:07:36.288 "raid": { 00:07:36.288 "uuid": "d4cf9836-2711-11ef-b084-113036b5c18d", 00:07:36.288 "strip_size_kb": 64, 00:07:36.288 "state": "online", 00:07:36.288 "raid_level": "concat", 00:07:36.288 "superblock": true, 00:07:36.288 "num_base_bdevs": 2, 00:07:36.288 "num_base_bdevs_discovered": 2, 00:07:36.288 "num_base_bdevs_operational": 2, 00:07:36.288 "base_bdevs_list": [ 00:07:36.288 { 00:07:36.288 "name": "pt1", 00:07:36.288 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.288 "is_configured": true, 00:07:36.288 "data_offset": 2048, 00:07:36.288 "data_size": 63488 00:07:36.288 }, 00:07:36.288 { 00:07:36.288 "name": "pt2", 00:07:36.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.288 "is_configured": true, 00:07:36.288 "data_offset": 2048, 00:07:36.288 "data_size": 63488 00:07:36.288 } 00:07:36.288 ] 00:07:36.288 } 00:07:36.288 } 00:07:36.288 }' 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:36.288 pt2' 00:07:36.288 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:36.548 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:36.548 10:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:36.806 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:36.806 "name": "pt1", 00:07:36.806 "aliases": [ 00:07:36.806 "00000000-0000-0000-0000-000000000001" 00:07:36.806 ], 00:07:36.806 "product_name": "passthru", 00:07:36.806 "block_size": 512, 00:07:36.806 "num_blocks": 65536, 00:07:36.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.806 "assigned_rate_limits": { 00:07:36.806 "rw_ios_per_sec": 0, 00:07:36.806 "rw_mbytes_per_sec": 0, 00:07:36.806 "r_mbytes_per_sec": 0, 00:07:36.806 "w_mbytes_per_sec": 0 00:07:36.806 }, 00:07:36.806 "claimed": true, 00:07:36.806 "claim_type": "exclusive_write", 00:07:36.806 "zoned": false, 00:07:36.806 "supported_io_types": { 00:07:36.806 "read": true, 00:07:36.806 "write": true, 00:07:36.806 "unmap": true, 00:07:36.806 "write_zeroes": true, 00:07:36.806 "flush": true, 00:07:36.806 "reset": true, 00:07:36.806 "compare": false, 00:07:36.806 "compare_and_write": false, 00:07:36.806 "abort": true, 00:07:36.806 "nvme_admin": false, 00:07:36.806 "nvme_io": false 00:07:36.806 }, 00:07:36.806 "memory_domains": [ 00:07:36.806 { 00:07:36.806 "dma_device_id": "system", 00:07:36.806 "dma_device_type": 1 00:07:36.806 }, 00:07:36.806 { 00:07:36.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.806 "dma_device_type": 2 00:07:36.806 } 00:07:36.806 ], 00:07:36.806 "driver_specific": { 00:07:36.806 "passthru": { 00:07:36.806 "name": "pt1", 00:07:36.806 "base_bdev_name": "malloc1" 00:07:36.806 } 00:07:36.806 } 00:07:36.806 }' 00:07:36.806 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:36.806 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:36.806 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:36.806 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:36.806 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:36.806 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:36.806 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:36.806 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:36.806 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:36.807 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:36.807 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:36.807 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:36.807 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:36.807 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:36.807 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:37.065 "name": "pt2", 00:07:37.065 "aliases": [ 00:07:37.065 "00000000-0000-0000-0000-000000000002" 00:07:37.065 ], 00:07:37.065 "product_name": "passthru", 00:07:37.065 "block_size": 512, 00:07:37.065 "num_blocks": 65536, 00:07:37.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.065 
"assigned_rate_limits": { 00:07:37.065 "rw_ios_per_sec": 0, 00:07:37.065 "rw_mbytes_per_sec": 0, 00:07:37.065 "r_mbytes_per_sec": 0, 00:07:37.065 "w_mbytes_per_sec": 0 00:07:37.065 }, 00:07:37.065 "claimed": true, 00:07:37.065 "claim_type": "exclusive_write", 00:07:37.065 "zoned": false, 00:07:37.065 "supported_io_types": { 00:07:37.065 "read": true, 00:07:37.065 "write": true, 00:07:37.065 "unmap": true, 00:07:37.065 "write_zeroes": true, 00:07:37.065 "flush": true, 00:07:37.065 "reset": true, 00:07:37.065 "compare": false, 00:07:37.065 "compare_and_write": false, 00:07:37.065 "abort": true, 00:07:37.065 "nvme_admin": false, 00:07:37.065 "nvme_io": false 00:07:37.065 }, 00:07:37.065 "memory_domains": [ 00:07:37.065 { 00:07:37.065 "dma_device_id": "system", 00:07:37.065 "dma_device_type": 1 00:07:37.065 }, 00:07:37.065 { 00:07:37.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.065 "dma_device_type": 2 00:07:37.065 } 00:07:37.065 ], 00:07:37.065 "driver_specific": { 00:07:37.065 "passthru": { 00:07:37.065 "name": "pt2", 00:07:37.065 "base_bdev_name": "malloc2" 00:07:37.065 } 00:07:37.065 } 00:07:37.065 }' 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:37.065 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:07:37.324 [2024-06-10 10:11:42.908065] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.583 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d4cf9836-2711-11ef-b084-113036b5c18d 00:07:37.583 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d4cf9836-2711-11ef-b084-113036b5c18d ']' 00:07:37.583 10:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:37.583 [2024-06-10 10:11:43.180089] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.583 [2024-06-10 10:11:43.180116] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.583 [2024-06-10 10:11:43.180140] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.583 
[2024-06-10 10:11:43.180156] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.583 [2024-06-10 10:11:43.180164] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82afb2f00 name raid_bdev1, state offline 00:07:37.841 10:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.841 10:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:07:38.100 10:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:07:38.100 10:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:07:38.100 10:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:38.100 10:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:38.358 10:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:38.358 10:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:38.617 10:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:38.617 10:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:38.617 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:39.241 [2024-06-10 10:11:44.544462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:39.241 [2024-06-10 10:11:44.544913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:39.241 [2024-06-10 10:11:44.544936] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:39.241 [2024-06-10 10:11:44.544978] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:39.241 [2024-06-10 10:11:44.544995] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:39.241 [2024-06-10 10:11:44.545003] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82afb2c80 name raid_bdev1, state configuring 00:07:39.241 request: 00:07:39.241 { 00:07:39.241 "name": "raid_bdev1", 00:07:39.241 "raid_level": "concat", 00:07:39.241 "base_bdevs": [ 00:07:39.241 "malloc1", 00:07:39.241 "malloc2" 00:07:39.241 ], 00:07:39.241 "superblock": false, 00:07:39.241 "strip_size_kb": 64, 00:07:39.241 "method": "bdev_raid_create", 00:07:39.241 "req_id": 1 00:07:39.241 } 00:07:39.241 Got JSON-RPC error response 00:07:39.241 response: 00:07:39.241 { 00:07:39.241 "code": -17, 00:07:39.241 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:39.241 } 00:07:39.241 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:07:39.241 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:39.241 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:39.241 10:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:39.241 10:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.241 10:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:07:39.501 10:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:07:39.501 10:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:07:39.501 10:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:39.760 [2024-06-10 10:11:45.120588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:39.760 [2024-06-10 10:11:45.120642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.760 [2024-06-10 10:11:45.120654] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82afb2780 00:07:39.760 [2024-06-10 10:11:45.120661] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.760 [2024-06-10 10:11:45.121170] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.760 [2024-06-10 10:11:45.121198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:39.760 [2024-06-10 10:11:45.121221] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:39.760 [2024-06-10 10:11:45.121231] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:39.760 pt1 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:39.760 "name": "raid_bdev1", 00:07:39.760 "uuid": "d4cf9836-2711-11ef-b084-113036b5c18d", 00:07:39.760 "strip_size_kb": 64, 00:07:39.760 "state": "configuring", 00:07:39.760 "raid_level": "concat", 00:07:39.760 "superblock": true, 00:07:39.760 "num_base_bdevs": 2, 00:07:39.760 "num_base_bdevs_discovered": 1, 00:07:39.760 "num_base_bdevs_operational": 2, 00:07:39.760 "base_bdevs_list": [ 00:07:39.760 { 00:07:39.760 "name": "pt1", 00:07:39.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:39.760 "is_configured": true, 00:07:39.760 "data_offset": 2048, 00:07:39.760 "data_size": 63488 00:07:39.760 }, 00:07:39.760 { 00:07:39.760 "name": null, 00:07:39.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:39.760 "is_configured": false, 00:07:39.760 "data_offset": 2048, 00:07:39.760 "data_size": 63488 00:07:39.760 } 00:07:39.760 ] 00:07:39.760 }' 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:39.760 10:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.328 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:07:40.328 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:07:40.328 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:40.328 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:40.588 [2024-06-10 10:11:45.976814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:40.588 [2024-06-10 10:11:45.976879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.588 [2024-06-10 10:11:45.976897] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x82afb2f00 00:07:40.588 [2024-06-10 10:11:45.976935] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.588 [2024-06-10 10:11:45.977058] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.588 [2024-06-10 10:11:45.977075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:40.588 [2024-06-10 10:11:45.977113] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:40.588 [2024-06-10 10:11:45.977126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:40.588 [2024-06-10 10:11:45.977160] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82afb3180 00:07:40.588 [2024-06-10 10:11:45.977169] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.588 [2024-06-10 10:11:45.977217] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b015e20 00:07:40.588 [2024-06-10 10:11:45.977262] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82afb3180 00:07:40.588 [2024-06-10 10:11:45.977266] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82afb3180 00:07:40.588 [2024-06-10 10:11:45.977284] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.588 pt2 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:40.588 10:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.847 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:40.847 "name": "raid_bdev1", 00:07:40.847 "uuid": "d4cf9836-2711-11ef-b084-113036b5c18d", 00:07:40.847 "strip_size_kb": 64, 00:07:40.847 "state": "online", 00:07:40.847 "raid_level": "concat", 00:07:40.847 "superblock": true, 00:07:40.847 "num_base_bdevs": 2, 00:07:40.847 "num_base_bdevs_discovered": 2, 00:07:40.847 "num_base_bdevs_operational": 2, 00:07:40.847 "base_bdevs_list": [ 00:07:40.847 { 00:07:40.847 
"name": "pt1", 00:07:40.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:40.847 "is_configured": true, 00:07:40.847 "data_offset": 2048, 00:07:40.847 "data_size": 63488 00:07:40.847 }, 00:07:40.847 { 00:07:40.847 "name": "pt2", 00:07:40.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:40.847 "is_configured": true, 00:07:40.847 "data_offset": 2048, 00:07:40.847 "data_size": 63488 00:07:40.847 } 00:07:40.847 ] 00:07:40.847 }' 00:07:40.847 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:40.847 10:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.105 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:07:41.105 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:41.105 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:41.105 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:41.105 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:41.105 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:41.105 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:41.105 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:41.364 [2024-06-10 10:11:46.724928] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.364 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:41.364 "name": "raid_bdev1", 00:07:41.364 "aliases": [ 00:07:41.364 "d4cf9836-2711-11ef-b084-113036b5c18d" 00:07:41.364 ], 00:07:41.364 "product_name": "Raid Volume", 00:07:41.364 "block_size": 512, 00:07:41.364 "num_blocks": 126976, 00:07:41.364 "uuid": "d4cf9836-2711-11ef-b084-113036b5c18d", 00:07:41.364 "assigned_rate_limits": { 00:07:41.364 "rw_ios_per_sec": 0, 00:07:41.364 "rw_mbytes_per_sec": 0, 00:07:41.364 "r_mbytes_per_sec": 0, 00:07:41.364 "w_mbytes_per_sec": 0 00:07:41.364 }, 00:07:41.364 "claimed": false, 00:07:41.364 "zoned": false, 00:07:41.364 "supported_io_types": { 00:07:41.364 "read": true, 00:07:41.364 "write": true, 00:07:41.364 "unmap": true, 00:07:41.364 "write_zeroes": true, 00:07:41.364 "flush": true, 00:07:41.364 "reset": true, 00:07:41.364 "compare": false, 00:07:41.364 "compare_and_write": false, 00:07:41.364 "abort": false, 00:07:41.364 "nvme_admin": false, 00:07:41.364 "nvme_io": false 00:07:41.364 }, 00:07:41.364 "memory_domains": [ 00:07:41.364 { 00:07:41.364 "dma_device_id": "system", 00:07:41.364 "dma_device_type": 1 00:07:41.364 }, 00:07:41.364 { 00:07:41.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.364 "dma_device_type": 2 00:07:41.364 }, 00:07:41.364 { 00:07:41.364 "dma_device_id": "system", 00:07:41.364 "dma_device_type": 1 00:07:41.364 }, 00:07:41.364 { 00:07:41.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.364 "dma_device_type": 2 00:07:41.364 } 00:07:41.364 ], 00:07:41.364 "driver_specific": { 00:07:41.364 "raid": { 00:07:41.364 "uuid": "d4cf9836-2711-11ef-b084-113036b5c18d", 00:07:41.364 "strip_size_kb": 64, 00:07:41.364 "state": "online", 00:07:41.364 "raid_level": "concat", 00:07:41.364 "superblock": true, 00:07:41.364 "num_base_bdevs": 2, 00:07:41.364 
"num_base_bdevs_discovered": 2, 00:07:41.364 "num_base_bdevs_operational": 2, 00:07:41.364 "base_bdevs_list": [ 00:07:41.364 { 00:07:41.364 "name": "pt1", 00:07:41.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.364 "is_configured": true, 00:07:41.364 "data_offset": 2048, 00:07:41.364 "data_size": 63488 00:07:41.364 }, 00:07:41.364 { 00:07:41.364 "name": "pt2", 00:07:41.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.364 "is_configured": true, 00:07:41.364 "data_offset": 2048, 00:07:41.364 "data_size": 63488 00:07:41.364 } 00:07:41.364 ] 00:07:41.364 } 00:07:41.364 } 00:07:41.364 }' 00:07:41.364 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:41.364 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:41.364 pt2' 00:07:41.364 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:41.364 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:41.364 10:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:41.624 "name": "pt1", 00:07:41.624 "aliases": [ 00:07:41.624 "00000000-0000-0000-0000-000000000001" 00:07:41.624 ], 00:07:41.624 "product_name": "passthru", 00:07:41.624 "block_size": 512, 00:07:41.624 "num_blocks": 65536, 00:07:41.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.624 "assigned_rate_limits": { 00:07:41.624 "rw_ios_per_sec": 0, 00:07:41.624 "rw_mbytes_per_sec": 0, 00:07:41.624 "r_mbytes_per_sec": 0, 00:07:41.624 "w_mbytes_per_sec": 0 00:07:41.624 }, 00:07:41.624 "claimed": true, 00:07:41.624 "claim_type": "exclusive_write", 00:07:41.624 "zoned": false, 00:07:41.624 "supported_io_types": { 00:07:41.624 "read": true, 00:07:41.624 "write": true, 00:07:41.624 "unmap": true, 00:07:41.624 "write_zeroes": true, 00:07:41.624 "flush": true, 00:07:41.624 "reset": true, 00:07:41.624 "compare": false, 00:07:41.624 "compare_and_write": false, 00:07:41.624 "abort": true, 00:07:41.624 "nvme_admin": false, 00:07:41.624 "nvme_io": false 00:07:41.624 }, 00:07:41.624 "memory_domains": [ 00:07:41.624 { 00:07:41.624 "dma_device_id": "system", 00:07:41.624 "dma_device_type": 1 00:07:41.624 }, 00:07:41.624 { 00:07:41.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.624 "dma_device_type": 2 00:07:41.624 } 00:07:41.624 ], 00:07:41.624 "driver_specific": { 00:07:41.624 "passthru": { 00:07:41.624 "name": "pt1", 00:07:41.624 "base_bdev_name": "malloc1" 00:07:41.624 } 00:07:41.624 } 00:07:41.624 }' 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:41.624 10:11:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:41.624 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:41.883 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:41.883 "name": "pt2", 00:07:41.883 "aliases": [ 00:07:41.883 "00000000-0000-0000-0000-000000000002" 00:07:41.883 ], 00:07:41.883 "product_name": "passthru", 00:07:41.883 "block_size": 512, 00:07:41.883 "num_blocks": 65536, 00:07:41.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.883 "assigned_rate_limits": { 00:07:41.884 "rw_ios_per_sec": 0, 00:07:41.884 "rw_mbytes_per_sec": 0, 00:07:41.884 "r_mbytes_per_sec": 0, 00:07:41.884 "w_mbytes_per_sec": 0 00:07:41.884 }, 00:07:41.884 "claimed": true, 00:07:41.884 "claim_type": "exclusive_write", 00:07:41.884 "zoned": false, 00:07:41.884 "supported_io_types": { 00:07:41.884 "read": true, 00:07:41.884 "write": true, 00:07:41.884 "unmap": true, 00:07:41.884 "write_zeroes": true, 00:07:41.884 "flush": true, 00:07:41.884 "reset": true, 00:07:41.884 "compare": false, 00:07:41.884 "compare_and_write": false, 00:07:41.884 "abort": true, 00:07:41.884 "nvme_admin": false, 00:07:41.884 "nvme_io": false 00:07:41.884 }, 00:07:41.884 "memory_domains": [ 00:07:41.884 { 00:07:41.884 "dma_device_id": "system", 00:07:41.884 "dma_device_type": 1 00:07:41.884 }, 00:07:41.884 { 00:07:41.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.884 "dma_device_type": 2 00:07:41.884 } 00:07:41.884 ], 00:07:41.884 "driver_specific": { 00:07:41.884 "passthru": { 00:07:41.884 "name": "pt2", 00:07:41.884 "base_bdev_name": "malloc2" 00:07:41.884 } 00:07:41.884 } 00:07:41.884 }' 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # [[ null == null ]] 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:41.884 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:07:42.144 [2024-06-10 10:11:47.685100] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d4cf9836-2711-11ef-b084-113036b5c18d '!=' d4cf9836-2711-11ef-b084-113036b5c18d ']' 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 51015 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 51015 ']' 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 51015 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps -c -o command 51015 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # tail -1 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:07:42.144 killing process with pid 51015 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 51015' 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 51015 00:07:42.144 [2024-06-10 10:11:47.716792] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.144 [2024-06-10 10:11:47.716833] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.144 [2024-06-10 10:11:47.716852] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.144 [2024-06-10 10:11:47.716862] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82afb3180 name raid_bdev1, state offline 00:07:42.144 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 51015 00:07:42.144 [2024-06-10 10:11:47.726721] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.403 10:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:07:42.403 00:07:42.403 real 0m9.513s 00:07:42.403 user 0m16.766s 00:07:42.403 sys 0m1.541s 00:07:42.403 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:42.403 10:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.403 ************************************ 00:07:42.403 END TEST raid_superblock_test 00:07:42.403 ************************************ 00:07:42.403 10:11:47 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:42.403 10:11:47 
bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:07:42.403 10:11:47 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:42.403 10:11:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.403 ************************************ 00:07:42.403 START TEST raid_read_error_test 00:07:42.403 ************************************ 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 2 read 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.0J1O8TRj 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51284 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51284 /var/tmp/spdk-raid.sock 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 51284 ']' 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # 
local rpc_addr=/var/tmp/spdk-raid.sock 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:42.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:42.403 10:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.403 [2024-06-10 10:11:47.958306] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:42.403 [2024-06-10 10:11:47.958491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:43.005 EAL: TSC is not safe to use in SMP mode 00:07:43.005 EAL: TSC is not invariant 00:07:43.005 [2024-06-10 10:11:48.440081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.005 [2024-06-10 10:11:48.519136] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:43.005 [2024-06-10 10:11:48.521292] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.005 [2024-06-10 10:11:48.521985] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.005 [2024-06-10 10:11:48.521998] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.571 10:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:43.571 10:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:07:43.571 10:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:43.571 10:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:43.829 BaseBdev1_malloc 00:07:43.829 10:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:43.829 true 00:07:43.829 10:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:44.088 [2024-06-10 10:11:49.596496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:44.088 [2024-06-10 10:11:49.596556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.088 [2024-06-10 10:11:49.596582] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ce86780 00:07:44.088 [2024-06-10 10:11:49.596589] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.088 [2024-06-10 10:11:49.597084] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.088 [2024-06-10 10:11:49.597106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:44.088 BaseBdev1 00:07:44.088 10:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:44.088 10:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:44.347 BaseBdev2_malloc 00:07:44.347 10:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:44.605 true 00:07:44.605 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:44.864 [2024-06-10 10:11:50.268680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:44.864 [2024-06-10 10:11:50.268748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.864 [2024-06-10 10:11:50.268782] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ce86c80 00:07:44.864 [2024-06-10 10:11:50.268793] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.864 [2024-06-10 10:11:50.269385] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.864 [2024-06-10 10:11:50.269422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:44.864 BaseBdev2 00:07:44.864 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:45.123 [2024-06-10 10:11:50.532729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.123 [2024-06-10 10:11:50.533212] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:45.124 [2024-06-10 10:11:50.533272] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ce86f00 00:07:45.124 [2024-06-10 10:11:50.533277] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:45.124 [2024-06-10 10:11:50.533307] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cef2e20 00:07:45.124 [2024-06-10 10:11:50.533365] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ce86f00 00:07:45.124 [2024-06-10 10:11:50.533369] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82ce86f00 00:07:45.124 [2024-06-10 10:11:50.533391] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.124 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:45.124 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:45.124 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:45.124 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:45.124 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:45.124 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:45.124 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:45.125 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:45.125 10:11:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:45.125 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:45.125 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.125 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:45.386 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:45.386 "name": "raid_bdev1", 00:07:45.386 "uuid": "da7d0058-2711-11ef-b084-113036b5c18d", 00:07:45.386 "strip_size_kb": 64, 00:07:45.386 "state": "online", 00:07:45.386 "raid_level": "concat", 00:07:45.386 "superblock": true, 00:07:45.386 "num_base_bdevs": 2, 00:07:45.386 "num_base_bdevs_discovered": 2, 00:07:45.386 "num_base_bdevs_operational": 2, 00:07:45.386 "base_bdevs_list": [ 00:07:45.386 { 00:07:45.386 "name": "BaseBdev1", 00:07:45.386 "uuid": "b4d2a20b-0183-a253-95a7-a7ba4d4430d1", 00:07:45.386 "is_configured": true, 00:07:45.386 "data_offset": 2048, 00:07:45.386 "data_size": 63488 00:07:45.386 }, 00:07:45.386 { 00:07:45.386 "name": "BaseBdev2", 00:07:45.386 "uuid": "0da31e83-b884-8c54-9821-9c2231cb2fc0", 00:07:45.386 "is_configured": true, 00:07:45.386 "data_offset": 2048, 00:07:45.386 "data_size": 63488 00:07:45.386 } 00:07:45.386 ] 00:07:45.386 }' 00:07:45.386 10:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:45.386 10:11:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.645 10:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:45.645 10:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:45.645 [2024-06-10 10:11:51.120939] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cef2ec0 00:07:46.615 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:46.874 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.133 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:47.133 "name": "raid_bdev1", 00:07:47.133 "uuid": "da7d0058-2711-11ef-b084-113036b5c18d", 00:07:47.133 "strip_size_kb": 64, 00:07:47.133 "state": "online", 00:07:47.133 "raid_level": "concat", 00:07:47.133 "superblock": true, 00:07:47.133 "num_base_bdevs": 2, 00:07:47.133 "num_base_bdevs_discovered": 2, 00:07:47.133 "num_base_bdevs_operational": 2, 00:07:47.133 "base_bdevs_list": [ 00:07:47.133 { 00:07:47.133 "name": "BaseBdev1", 00:07:47.133 "uuid": "b4d2a20b-0183-a253-95a7-a7ba4d4430d1", 00:07:47.133 "is_configured": true, 00:07:47.133 "data_offset": 2048, 00:07:47.133 "data_size": 63488 00:07:47.133 }, 00:07:47.134 { 00:07:47.134 "name": "BaseBdev2", 00:07:47.134 "uuid": "0da31e83-b884-8c54-9821-9c2231cb2fc0", 00:07:47.134 "is_configured": true, 00:07:47.134 "data_offset": 2048, 00:07:47.134 "data_size": 63488 00:07:47.134 } 00:07:47.134 ] 00:07:47.134 }' 00:07:47.134 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:47.134 10:11:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.393 10:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:47.651 [2024-06-10 10:11:53.198168] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.651 [2024-06-10 10:11:53.198197] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.651 [2024-06-10 10:11:53.198469] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.651 [2024-06-10 10:11:53.198478] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.651 [2024-06-10 10:11:53.198483] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.651 [2024-06-10 10:11:53.198486] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ce86f00 name raid_bdev1, state offline 00:07:47.651 0 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51284 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 51284 ']' 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 51284 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 51284 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # tail -1 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:07:47.651 killing process with pid 51284 00:07:47.651 10:11:53 
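Up to this point raid_read_error_test has built the device stack, started I/O and injected its fault; reduced to the RPC calls visible in the trace, the setup looks roughly like this (the $rpc shorthand is illustrative, the arguments are copied from the trace above):

  rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # per base bdev: a malloc backing store, an error bdev wrapping it, and a passthru bdev on top
  $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $rpc bdev_error_create BaseBdev1_malloc                        # exposes EE_BaseBdev1_malloc
  $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # ...the same three calls are repeated for BaseBdev2...
  # assemble both passthru bdevs into a concat volume with a 64k strip and an on-disk superblock (-s)
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
  # while bdevperf runs against raid_bdev1, start failing reads on the first base bdev
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure

Reads that land on the failing base bdev surface as I/O errors on raid_bdev1, which bdevperf records in its log for the final failure-rate check.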
bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 51284' 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 51284 00:07:47.651 [2024-06-10 10:11:53.227802] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.651 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 51284 00:07:47.651 [2024-06-10 10:11:53.237413] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.910 10:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.0J1O8TRj 00:07:47.910 10:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:47.910 10:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:47.910 10:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:07:47.910 10:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:07:47.910 10:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:47.910 10:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:47.910 10:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:07:47.910 00:07:47.910 real 0m5.468s 00:07:47.910 user 0m8.165s 00:07:47.910 sys 0m1.065s 00:07:47.910 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:47.910 ************************************ 00:07:47.910 END TEST raid_read_error_test 00:07:47.910 ************************************ 00:07:47.910 10:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.910 10:11:53 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:47.910 10:11:53 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:07:47.910 10:11:53 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:47.910 10:11:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.910 ************************************ 00:07:47.910 START TEST raid_write_error_test 00:07:47.910 ************************************ 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 2 write 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # 
(( i++ )) 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.0vWHb532 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51408 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51408 /var/tmp/spdk-raid.sock 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 51408 ']' 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:47.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:47.910 10:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:47.911 10:11:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.911 [2024-06-10 10:11:53.474603] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:47.911 [2024-06-10 10:11:53.474860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:48.479 EAL: TSC is not safe to use in SMP mode 00:07:48.479 EAL: TSC is not invariant 00:07:48.479 [2024-06-10 10:11:53.931583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.479 [2024-06-10 10:11:54.008377] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
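Both error tests drive I/O with the bdevperf invocation that has just started here: bdevperf is launched against the shared RPC socket and left waiting, the raid volume is assembled through that same socket, and the run is then kicked off with the companion bdevperf.py perform_tests helper. A sketch of the pattern as it appears in the trace (the log path comes from mktemp -p /raidtest, so the file name differs per run, and the output redirection shown is an assumption):

  bdevperf_log=$(mktemp -p /raidtest)
  /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f \
      -L bdev_raid > "$bdevperf_log" &
  # ...wait for the socket, then create the base bdevs and raid_bdev1 over it (see the sketch above)...
  /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests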
00:07:48.479 [2024-06-10 10:11:54.010520] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.479 [2024-06-10 10:11:54.011207] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.479 [2024-06-10 10:11:54.011219] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:49.446 10:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:49.446 10:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:07:49.446 10:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:49.446 10:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:49.446 BaseBdev1_malloc 00:07:49.446 10:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:49.705 true 00:07:49.705 10:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:49.964 [2024-06-10 10:11:55.473551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:49.964 [2024-06-10 10:11:55.473613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:49.964 [2024-06-10 10:11:55.473639] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a90c780 00:07:49.964 [2024-06-10 10:11:55.473647] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:49.964 [2024-06-10 10:11:55.474166] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:49.964 [2024-06-10 10:11:55.474194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:49.964 BaseBdev1 00:07:49.964 10:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:49.964 10:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:50.223 BaseBdev2_malloc 00:07:50.223 10:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:50.482 true 00:07:50.482 10:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:50.742 [2024-06-10 10:11:56.177589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:50.742 [2024-06-10 10:11:56.177671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.742 [2024-06-10 10:11:56.177697] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a90cc80 00:07:50.742 [2024-06-10 10:11:56.177704] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.742 [2024-06-10 10:11:56.178246] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.742 [2024-06-10 10:11:56.178276] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:50.742 BaseBdev2 00:07:50.742 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:51.002 [2024-06-10 10:11:56.365634] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.002 [2024-06-10 10:11:56.366093] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.002 [2024-06-10 10:11:56.366161] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a90cf00 00:07:51.002 [2024-06-10 10:11:56.366165] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:51.002 [2024-06-10 10:11:56.366193] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a978e20 00:07:51.002 [2024-06-10 10:11:56.366247] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a90cf00 00:07:51.002 [2024-06-10 10:11:56.366250] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a90cf00 00:07:51.002 [2024-06-10 10:11:56.366271] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:51.002 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.260 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:51.260 "name": "raid_bdev1", 00:07:51.260 "uuid": "ddf70837-2711-11ef-b084-113036b5c18d", 00:07:51.260 "strip_size_kb": 64, 00:07:51.260 "state": "online", 00:07:51.260 "raid_level": "concat", 00:07:51.261 "superblock": true, 00:07:51.261 "num_base_bdevs": 2, 00:07:51.261 "num_base_bdevs_discovered": 2, 00:07:51.261 "num_base_bdevs_operational": 2, 00:07:51.261 "base_bdevs_list": [ 00:07:51.261 { 00:07:51.261 "name": "BaseBdev1", 00:07:51.261 "uuid": "94b163c1-3feb-515e-8c9e-f48dc4d9cd95", 00:07:51.261 "is_configured": true, 00:07:51.261 "data_offset": 2048, 00:07:51.261 "data_size": 63488 00:07:51.261 }, 00:07:51.261 { 00:07:51.261 "name": "BaseBdev2", 00:07:51.261 
"uuid": "cfafdd07-32d8-b856-b1d5-8aba32cf8f47", 00:07:51.261 "is_configured": true, 00:07:51.261 "data_offset": 2048, 00:07:51.261 "data_size": 63488 00:07:51.261 } 00:07:51.261 ] 00:07:51.261 }' 00:07:51.261 10:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:51.261 10:11:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.519 10:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:51.519 10:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:51.778 [2024-06-10 10:11:57.141821] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a978ec0 00:07:52.715 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:52.975 "name": "raid_bdev1", 00:07:52.975 "uuid": "ddf70837-2711-11ef-b084-113036b5c18d", 00:07:52.975 "strip_size_kb": 64, 00:07:52.975 "state": "online", 00:07:52.975 "raid_level": "concat", 00:07:52.975 "superblock": true, 00:07:52.975 "num_base_bdevs": 2, 00:07:52.975 "num_base_bdevs_discovered": 2, 00:07:52.975 "num_base_bdevs_operational": 2, 00:07:52.975 "base_bdevs_list": [ 00:07:52.975 { 00:07:52.975 "name": "BaseBdev1", 00:07:52.975 "uuid": "94b163c1-3feb-515e-8c9e-f48dc4d9cd95", 00:07:52.975 "is_configured": true, 00:07:52.975 "data_offset": 2048, 00:07:52.975 "data_size": 63488 00:07:52.975 }, 00:07:52.975 { 00:07:52.975 "name": "BaseBdev2", 
00:07:52.975 "uuid": "cfafdd07-32d8-b856-b1d5-8aba32cf8f47", 00:07:52.975 "is_configured": true, 00:07:52.975 "data_offset": 2048, 00:07:52.975 "data_size": 63488 00:07:52.975 } 00:07:52.975 ] 00:07:52.975 }' 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:52.975 10:11:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.544 10:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:53.544 [2024-06-10 10:11:59.068114] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.544 [2024-06-10 10:11:59.068146] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.544 [2024-06-10 10:11:59.068586] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.544 [2024-06-10 10:11:59.068600] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.544 [2024-06-10 10:11:59.068611] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.544 [2024-06-10 10:11:59.068619] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a90cf00 name raid_bdev1, state offline 00:07:53.544 0 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51408 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 51408 ']' 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 51408 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 51408 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # tail -1 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:07:53.544 killing process with pid 51408 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 51408' 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 51408 00:07:53.544 [2024-06-10 10:11:59.096159] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.544 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 51408 00:07:53.544 [2024-06-10 10:11:59.105648] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.803 10:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.0vWHb532 00:07:53.803 10:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:53.803 10:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:53.803 10:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:07:53.803 10:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:07:53.803 10:11:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:07:53.803 10:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:53.803 10:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:07:53.803 00:07:53.803 real 0m5.825s 00:07:53.803 user 0m9.002s 00:07:53.803 sys 0m0.951s 00:07:53.803 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:53.803 10:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.803 ************************************ 00:07:53.803 END TEST raid_write_error_test 00:07:53.803 ************************************ 00:07:53.803 10:11:59 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:53.803 10:11:59 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:53.803 10:11:59 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:07:53.803 10:11:59 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:53.803 10:11:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.803 ************************************ 00:07:53.803 START TEST raid_state_function_test 00:07:53.803 ************************************ 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 false 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:07:53.803 10:11:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=51534 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51534' 00:07:53.803 Process raid pid: 51534 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 51534 /var/tmp/spdk-raid.sock 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 51534 ']' 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:53.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:53.803 10:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.803 [2024-06-10 10:11:59.337556] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:07:53.803 [2024-06-10 10:11:59.337742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:54.369 EAL: TSC is not safe to use in SMP mode 00:07:54.369 EAL: TSC is not invariant 00:07:54.369 [2024-06-10 10:11:59.812025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.369 [2024-06-10 10:11:59.887603] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:54.369 [2024-06-10 10:11:59.889731] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.370 [2024-06-10 10:11:59.890386] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.370 [2024-06-10 10:11:59.890398] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.934 10:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:54.934 10:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:07:54.934 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:55.192 [2024-06-10 10:12:00.588659] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.192 [2024-06-10 10:12:00.588718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.192 [2024-06-10 10:12:00.588723] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.192 [2024-06-10 10:12:00.588730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:55.192 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.193 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.451 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:55.451 "name": "Existed_Raid", 00:07:55.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.451 "strip_size_kb": 0, 00:07:55.451 "state": "configuring", 00:07:55.451 "raid_level": "raid1", 00:07:55.451 "superblock": false, 00:07:55.451 "num_base_bdevs": 2, 00:07:55.451 "num_base_bdevs_discovered": 0, 00:07:55.451 "num_base_bdevs_operational": 2, 00:07:55.451 "base_bdevs_list": [ 00:07:55.451 { 00:07:55.451 "name": "BaseBdev1", 00:07:55.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.451 "is_configured": false, 00:07:55.451 "data_offset": 0, 00:07:55.451 "data_size": 0 00:07:55.451 }, 00:07:55.451 { 00:07:55.451 "name": "BaseBdev2", 
00:07:55.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.451 "is_configured": false, 00:07:55.451 "data_offset": 0, 00:07:55.451 "data_size": 0 00:07:55.451 } 00:07:55.451 ] 00:07:55.451 }' 00:07:55.451 10:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:55.451 10:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.708 10:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:55.966 [2024-06-10 10:12:01.320753] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.966 [2024-06-10 10:12:01.320778] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c85d500 name Existed_Raid, state configuring 00:07:55.966 10:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:56.224 [2024-06-10 10:12:01.584809] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.224 [2024-06-10 10:12:01.584869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.224 [2024-06-10 10:12:01.584873] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.224 [2024-06-10 10:12:01.584880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.224 10:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:56.224 [2024-06-10 10:12:01.765798] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.224 BaseBdev1 00:07:56.224 10:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:56.224 10:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:07:56.224 10:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:07:56.224 10:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:07:56.224 10:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:07:56.224 10:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:07:56.224 10:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:56.789 10:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:56.789 [ 00:07:56.789 { 00:07:56.789 "name": "BaseBdev1", 00:07:56.789 "aliases": [ 00:07:56.789 "e12ee2da-2711-11ef-b084-113036b5c18d" 00:07:56.789 ], 00:07:56.789 "product_name": "Malloc disk", 00:07:56.789 "block_size": 512, 00:07:56.789 "num_blocks": 65536, 00:07:56.789 "uuid": "e12ee2da-2711-11ef-b084-113036b5c18d", 00:07:56.789 "assigned_rate_limits": { 00:07:56.789 "rw_ios_per_sec": 0, 00:07:56.789 "rw_mbytes_per_sec": 0, 00:07:56.789 "r_mbytes_per_sec": 0, 00:07:56.789 "w_mbytes_per_sec": 0 
00:07:56.789 }, 00:07:56.789 "claimed": true, 00:07:56.789 "claim_type": "exclusive_write", 00:07:56.789 "zoned": false, 00:07:56.789 "supported_io_types": { 00:07:56.789 "read": true, 00:07:56.789 "write": true, 00:07:56.789 "unmap": true, 00:07:56.789 "write_zeroes": true, 00:07:56.789 "flush": true, 00:07:56.789 "reset": true, 00:07:56.789 "compare": false, 00:07:56.789 "compare_and_write": false, 00:07:56.789 "abort": true, 00:07:56.789 "nvme_admin": false, 00:07:56.789 "nvme_io": false 00:07:56.789 }, 00:07:56.789 "memory_domains": [ 00:07:56.789 { 00:07:56.789 "dma_device_id": "system", 00:07:56.789 "dma_device_type": 1 00:07:56.789 }, 00:07:56.789 { 00:07:56.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.789 "dma_device_type": 2 00:07:56.789 } 00:07:56.789 ], 00:07:56.789 "driver_specific": {} 00:07:56.789 } 00:07:56.789 ] 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.046 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.304 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:57.304 "name": "Existed_Raid", 00:07:57.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.304 "strip_size_kb": 0, 00:07:57.304 "state": "configuring", 00:07:57.304 "raid_level": "raid1", 00:07:57.304 "superblock": false, 00:07:57.304 "num_base_bdevs": 2, 00:07:57.304 "num_base_bdevs_discovered": 1, 00:07:57.304 "num_base_bdevs_operational": 2, 00:07:57.304 "base_bdevs_list": [ 00:07:57.304 { 00:07:57.304 "name": "BaseBdev1", 00:07:57.304 "uuid": "e12ee2da-2711-11ef-b084-113036b5c18d", 00:07:57.304 "is_configured": true, 00:07:57.304 "data_offset": 0, 00:07:57.304 "data_size": 65536 00:07:57.304 }, 00:07:57.304 { 00:07:57.304 "name": "BaseBdev2", 00:07:57.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.304 "is_configured": false, 00:07:57.304 "data_offset": 0, 00:07:57.304 "data_size": 0 00:07:57.304 } 00:07:57.304 ] 00:07:57.304 }' 00:07:57.304 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:57.304 10:12:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.562 10:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:57.820 [2024-06-10 10:12:03.221075] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.820 [2024-06-10 10:12:03.221105] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c85d500 name Existed_Raid, state configuring 00:07:57.820 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:58.078 [2024-06-10 10:12:03.469130] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.078 [2024-06-10 10:12:03.469797] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.078 [2024-06-10 10:12:03.469837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.078 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.336 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:58.336 "name": "Existed_Raid", 00:07:58.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.337 "strip_size_kb": 0, 00:07:58.337 "state": "configuring", 00:07:58.337 "raid_level": "raid1", 00:07:58.337 "superblock": false, 00:07:58.337 "num_base_bdevs": 2, 00:07:58.337 "num_base_bdevs_discovered": 1, 00:07:58.337 "num_base_bdevs_operational": 2, 00:07:58.337 "base_bdevs_list": [ 00:07:58.337 { 00:07:58.337 "name": "BaseBdev1", 00:07:58.337 "uuid": "e12ee2da-2711-11ef-b084-113036b5c18d", 00:07:58.337 "is_configured": true, 00:07:58.337 "data_offset": 0, 00:07:58.337 "data_size": 
65536 00:07:58.337 }, 00:07:58.337 { 00:07:58.337 "name": "BaseBdev2", 00:07:58.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.337 "is_configured": false, 00:07:58.337 "data_offset": 0, 00:07:58.337 "data_size": 0 00:07:58.337 } 00:07:58.337 ] 00:07:58.337 }' 00:07:58.337 10:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:58.337 10:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.595 10:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.872 [2024-06-10 10:12:04.393387] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.872 [2024-06-10 10:12:04.393415] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c85da00 00:07:58.872 [2024-06-10 10:12:04.393419] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:58.872 [2024-06-10 10:12:04.393438] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c8c0ec0 00:07:58.872 [2024-06-10 10:12:04.393512] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c85da00 00:07:58.872 [2024-06-10 10:12:04.393515] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c85da00 00:07:58.872 [2024-06-10 10:12:04.393540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.872 BaseBdev2 00:07:58.872 10:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:58.872 10:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:07:58.872 10:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:07:58.872 10:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:07:58.872 10:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:07:58.872 10:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:07:58.872 10:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:59.167 10:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:59.734 [ 00:07:59.734 { 00:07:59.734 "name": "BaseBdev2", 00:07:59.734 "aliases": [ 00:07:59.734 "e2bff46c-2711-11ef-b084-113036b5c18d" 00:07:59.734 ], 00:07:59.734 "product_name": "Malloc disk", 00:07:59.734 "block_size": 512, 00:07:59.734 "num_blocks": 65536, 00:07:59.734 "uuid": "e2bff46c-2711-11ef-b084-113036b5c18d", 00:07:59.734 "assigned_rate_limits": { 00:07:59.734 "rw_ios_per_sec": 0, 00:07:59.734 "rw_mbytes_per_sec": 0, 00:07:59.734 "r_mbytes_per_sec": 0, 00:07:59.735 "w_mbytes_per_sec": 0 00:07:59.735 }, 00:07:59.735 "claimed": true, 00:07:59.735 "claim_type": "exclusive_write", 00:07:59.735 "zoned": false, 00:07:59.735 "supported_io_types": { 00:07:59.735 "read": true, 00:07:59.735 "write": true, 00:07:59.735 "unmap": true, 00:07:59.735 "write_zeroes": true, 00:07:59.735 "flush": true, 00:07:59.735 "reset": true, 00:07:59.735 "compare": false, 00:07:59.735 
"compare_and_write": false, 00:07:59.735 "abort": true, 00:07:59.735 "nvme_admin": false, 00:07:59.735 "nvme_io": false 00:07:59.735 }, 00:07:59.735 "memory_domains": [ 00:07:59.735 { 00:07:59.735 "dma_device_id": "system", 00:07:59.735 "dma_device_type": 1 00:07:59.735 }, 00:07:59.735 { 00:07:59.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.735 "dma_device_type": 2 00:07:59.735 } 00:07:59.735 ], 00:07:59.735 "driver_specific": {} 00:07:59.735 } 00:07:59.735 ] 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:59.735 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.993 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:59.993 "name": "Existed_Raid", 00:07:59.993 "uuid": "e2bffa26-2711-11ef-b084-113036b5c18d", 00:07:59.993 "strip_size_kb": 0, 00:07:59.993 "state": "online", 00:07:59.993 "raid_level": "raid1", 00:07:59.993 "superblock": false, 00:07:59.993 "num_base_bdevs": 2, 00:07:59.993 "num_base_bdevs_discovered": 2, 00:07:59.993 "num_base_bdevs_operational": 2, 00:07:59.993 "base_bdevs_list": [ 00:07:59.993 { 00:07:59.993 "name": "BaseBdev1", 00:07:59.993 "uuid": "e12ee2da-2711-11ef-b084-113036b5c18d", 00:07:59.993 "is_configured": true, 00:07:59.993 "data_offset": 0, 00:07:59.993 "data_size": 65536 00:07:59.993 }, 00:07:59.993 { 00:07:59.993 "name": "BaseBdev2", 00:07:59.993 "uuid": "e2bff46c-2711-11ef-b084-113036b5c18d", 00:07:59.993 "is_configured": true, 00:07:59.993 "data_offset": 0, 00:07:59.993 "data_size": 65536 00:07:59.993 } 00:07:59.993 ] 00:07:59.993 }' 00:07:59.993 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:59.993 10:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.251 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties 
Existed_Raid 00:08:00.251 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:00.251 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:00.251 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:00.251 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:00.251 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:00.251 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:00.251 10:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:00.510 [2024-06-10 10:12:06.057551] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.510 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:00.510 "name": "Existed_Raid", 00:08:00.510 "aliases": [ 00:08:00.510 "e2bffa26-2711-11ef-b084-113036b5c18d" 00:08:00.510 ], 00:08:00.510 "product_name": "Raid Volume", 00:08:00.510 "block_size": 512, 00:08:00.510 "num_blocks": 65536, 00:08:00.510 "uuid": "e2bffa26-2711-11ef-b084-113036b5c18d", 00:08:00.510 "assigned_rate_limits": { 00:08:00.510 "rw_ios_per_sec": 0, 00:08:00.510 "rw_mbytes_per_sec": 0, 00:08:00.510 "r_mbytes_per_sec": 0, 00:08:00.510 "w_mbytes_per_sec": 0 00:08:00.510 }, 00:08:00.510 "claimed": false, 00:08:00.510 "zoned": false, 00:08:00.510 "supported_io_types": { 00:08:00.510 "read": true, 00:08:00.510 "write": true, 00:08:00.510 "unmap": false, 00:08:00.510 "write_zeroes": true, 00:08:00.510 "flush": false, 00:08:00.510 "reset": true, 00:08:00.510 "compare": false, 00:08:00.510 "compare_and_write": false, 00:08:00.510 "abort": false, 00:08:00.510 "nvme_admin": false, 00:08:00.510 "nvme_io": false 00:08:00.510 }, 00:08:00.510 "memory_domains": [ 00:08:00.510 { 00:08:00.510 "dma_device_id": "system", 00:08:00.510 "dma_device_type": 1 00:08:00.510 }, 00:08:00.510 { 00:08:00.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.510 "dma_device_type": 2 00:08:00.510 }, 00:08:00.510 { 00:08:00.510 "dma_device_id": "system", 00:08:00.510 "dma_device_type": 1 00:08:00.510 }, 00:08:00.510 { 00:08:00.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.510 "dma_device_type": 2 00:08:00.510 } 00:08:00.510 ], 00:08:00.510 "driver_specific": { 00:08:00.510 "raid": { 00:08:00.510 "uuid": "e2bffa26-2711-11ef-b084-113036b5c18d", 00:08:00.510 "strip_size_kb": 0, 00:08:00.510 "state": "online", 00:08:00.510 "raid_level": "raid1", 00:08:00.510 "superblock": false, 00:08:00.510 "num_base_bdevs": 2, 00:08:00.510 "num_base_bdevs_discovered": 2, 00:08:00.510 "num_base_bdevs_operational": 2, 00:08:00.510 "base_bdevs_list": [ 00:08:00.510 { 00:08:00.510 "name": "BaseBdev1", 00:08:00.510 "uuid": "e12ee2da-2711-11ef-b084-113036b5c18d", 00:08:00.510 "is_configured": true, 00:08:00.510 "data_offset": 0, 00:08:00.510 "data_size": 65536 00:08:00.510 }, 00:08:00.510 { 00:08:00.510 "name": "BaseBdev2", 00:08:00.510 "uuid": "e2bff46c-2711-11ef-b084-113036b5c18d", 00:08:00.510 "is_configured": true, 00:08:00.510 "data_offset": 0, 00:08:00.510 "data_size": 65536 00:08:00.510 } 00:08:00.510 ] 00:08:00.510 } 00:08:00.510 } 00:08:00.510 }' 00:08:00.510 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:00.510 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:00.510 BaseBdev2' 00:08:00.510 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:00.510 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:00.510 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:01.080 "name": "BaseBdev1", 00:08:01.080 "aliases": [ 00:08:01.080 "e12ee2da-2711-11ef-b084-113036b5c18d" 00:08:01.080 ], 00:08:01.080 "product_name": "Malloc disk", 00:08:01.080 "block_size": 512, 00:08:01.080 "num_blocks": 65536, 00:08:01.080 "uuid": "e12ee2da-2711-11ef-b084-113036b5c18d", 00:08:01.080 "assigned_rate_limits": { 00:08:01.080 "rw_ios_per_sec": 0, 00:08:01.080 "rw_mbytes_per_sec": 0, 00:08:01.080 "r_mbytes_per_sec": 0, 00:08:01.080 "w_mbytes_per_sec": 0 00:08:01.080 }, 00:08:01.080 "claimed": true, 00:08:01.080 "claim_type": "exclusive_write", 00:08:01.080 "zoned": false, 00:08:01.080 "supported_io_types": { 00:08:01.080 "read": true, 00:08:01.080 "write": true, 00:08:01.080 "unmap": true, 00:08:01.080 "write_zeroes": true, 00:08:01.080 "flush": true, 00:08:01.080 "reset": true, 00:08:01.080 "compare": false, 00:08:01.080 "compare_and_write": false, 00:08:01.080 "abort": true, 00:08:01.080 "nvme_admin": false, 00:08:01.080 "nvme_io": false 00:08:01.080 }, 00:08:01.080 "memory_domains": [ 00:08:01.080 { 00:08:01.080 "dma_device_id": "system", 00:08:01.080 "dma_device_type": 1 00:08:01.080 }, 00:08:01.080 { 00:08:01.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.080 "dma_device_type": 2 00:08:01.080 } 00:08:01.080 ], 00:08:01.080 "driver_specific": {} 00:08:01.080 }' 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 00:08:01.080 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:01.339 "name": "BaseBdev2", 00:08:01.339 "aliases": [ 00:08:01.339 "e2bff46c-2711-11ef-b084-113036b5c18d" 00:08:01.339 ], 00:08:01.339 "product_name": "Malloc disk", 00:08:01.339 "block_size": 512, 00:08:01.339 "num_blocks": 65536, 00:08:01.339 "uuid": "e2bff46c-2711-11ef-b084-113036b5c18d", 00:08:01.339 "assigned_rate_limits": { 00:08:01.339 "rw_ios_per_sec": 0, 00:08:01.339 "rw_mbytes_per_sec": 0, 00:08:01.339 "r_mbytes_per_sec": 0, 00:08:01.339 "w_mbytes_per_sec": 0 00:08:01.339 }, 00:08:01.339 "claimed": true, 00:08:01.339 "claim_type": "exclusive_write", 00:08:01.339 "zoned": false, 00:08:01.339 "supported_io_types": { 00:08:01.339 "read": true, 00:08:01.339 "write": true, 00:08:01.339 "unmap": true, 00:08:01.339 "write_zeroes": true, 00:08:01.339 "flush": true, 00:08:01.339 "reset": true, 00:08:01.339 "compare": false, 00:08:01.339 "compare_and_write": false, 00:08:01.339 "abort": true, 00:08:01.339 "nvme_admin": false, 00:08:01.339 "nvme_io": false 00:08:01.339 }, 00:08:01.339 "memory_domains": [ 00:08:01.339 { 00:08:01.339 "dma_device_id": "system", 00:08:01.339 "dma_device_type": 1 00:08:01.339 }, 00:08:01.339 { 00:08:01.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.339 "dma_device_type": 2 00:08:01.339 } 00:08:01.339 ], 00:08:01.339 "driver_specific": {} 00:08:01.339 }' 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:01.339 10:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:01.598 [2024-06-10 10:12:07.093683] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:01.598 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.856 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:01.856 "name": "Existed_Raid", 00:08:01.856 "uuid": "e2bffa26-2711-11ef-b084-113036b5c18d", 00:08:01.856 "strip_size_kb": 0, 00:08:01.856 "state": "online", 00:08:01.856 "raid_level": "raid1", 00:08:01.856 "superblock": false, 00:08:01.856 "num_base_bdevs": 2, 00:08:01.856 "num_base_bdevs_discovered": 1, 00:08:01.856 "num_base_bdevs_operational": 1, 00:08:01.856 "base_bdevs_list": [ 00:08:01.856 { 00:08:01.856 "name": null, 00:08:01.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.856 "is_configured": false, 00:08:01.856 "data_offset": 0, 00:08:01.856 "data_size": 65536 00:08:01.856 }, 00:08:01.856 { 00:08:01.856 "name": "BaseBdev2", 00:08:01.857 "uuid": "e2bff46c-2711-11ef-b084-113036b5c18d", 00:08:01.857 "is_configured": true, 00:08:01.857 "data_offset": 0, 00:08:01.857 "data_size": 65536 00:08:01.857 } 00:08:01.857 ] 00:08:01.857 }' 00:08:01.857 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:01.857 10:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.115 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:02.115 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:02.115 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.115 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:02.372 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:02.372 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:02.372 10:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:02.631 [2024-06-10 10:12:08.074529] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:02.631 [2024-06-10 10:12:08.074588] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.631 [2024-06-10 10:12:08.080379] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.631 [2024-06-10 10:12:08.080413] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.631 [2024-06-10 10:12:08.080420] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c85da00 name Existed_Raid, state offline 00:08:02.631 10:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:02.631 10:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:02.631 10:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:02.631 10:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 51534 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 51534 ']' 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 51534 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps -c -o command 51534 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # tail -1 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:08:02.890 killing process with pid 51534 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 51534' 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 51534 00:08:02.890 [2024-06-10 10:12:08.315079] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.890 [2024-06-10 10:12:08.315124] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 51534 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:02.890 00:08:02.890 real 0m9.161s 00:08:02.890 user 0m16.015s 00:08:02.890 sys 0m1.580s 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:02.890 10:12:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.890 ************************************ 00:08:02.890 END TEST raid_state_function_test 00:08:02.890 ************************************ 00:08:03.149 10:12:08 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:03.149 10:12:08 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:08:03.149 10:12:08 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:03.149 10:12:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.149 ************************************ 00:08:03.149 START TEST raid_state_function_test_sb 00:08:03.149 ************************************ 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=51805 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51805' 00:08:03.149 
Process raid pid: 51805 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 51805 /var/tmp/spdk-raid.sock 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 51805 ']' 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:03.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:03.149 10:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.149 [2024-06-10 10:12:08.550653] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:08:03.149 [2024-06-10 10:12:08.550831] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:03.781 EAL: TSC is not safe to use in SMP mode 00:08:03.781 EAL: TSC is not invariant 00:08:03.781 [2024-06-10 10:12:09.029784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.781 [2024-06-10 10:12:09.106775] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
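The superblock variant below exercises the same flow; the only visible difference in the RPC calls is the -s flag passed to bdev_raid_create, e.g. (a sketch, same socket assumption as above):

./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

With superblock space reserved at the start of each base bdev, the reported data_offset moves from 0 to 2048 blocks and data_size drops from 65536 to 63488, as the JSON dumps in the following output show.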
00:08:03.781 [2024-06-10 10:12:09.109033] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.781 [2024-06-10 10:12:09.109714] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.781 [2024-06-10 10:12:09.109730] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.039 10:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:04.039 10:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:08:04.039 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:04.297 [2024-06-10 10:12:09.748177] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.297 [2024-06-10 10:12:09.748249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.297 [2024-06-10 10:12:09.748258] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.297 [2024-06-10 10:12:09.748274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:04.297 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.555 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:04.555 "name": "Existed_Raid", 00:08:04.555 "uuid": "e5f10bf6-2711-11ef-b084-113036b5c18d", 00:08:04.555 "strip_size_kb": 0, 00:08:04.555 "state": "configuring", 00:08:04.555 "raid_level": "raid1", 00:08:04.555 "superblock": true, 00:08:04.555 "num_base_bdevs": 2, 00:08:04.555 "num_base_bdevs_discovered": 0, 00:08:04.555 "num_base_bdevs_operational": 2, 00:08:04.555 "base_bdevs_list": [ 00:08:04.555 { 00:08:04.555 "name": "BaseBdev1", 00:08:04.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.555 "is_configured": false, 00:08:04.555 "data_offset": 0, 00:08:04.555 "data_size": 0 00:08:04.555 }, 
00:08:04.555 { 00:08:04.555 "name": "BaseBdev2", 00:08:04.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.555 "is_configured": false, 00:08:04.555 "data_offset": 0, 00:08:04.555 "data_size": 0 00:08:04.555 } 00:08:04.555 ] 00:08:04.555 }' 00:08:04.555 10:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:04.555 10:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.814 10:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:05.073 [2024-06-10 10:12:10.464223] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.073 [2024-06-10 10:12:10.464246] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d081500 name Existed_Raid, state configuring 00:08:05.073 10:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:05.332 [2024-06-10 10:12:10.768271] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.332 [2024-06-10 10:12:10.768313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.332 [2024-06-10 10:12:10.768317] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.332 [2024-06-10 10:12:10.768325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.332 10:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.591 [2024-06-10 10:12:11.013170] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.592 BaseBdev1 00:08:05.592 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:05.592 10:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:08:05.592 10:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:08:05.592 10:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:08:05.592 10:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:08:05.592 10:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:08:05.592 10:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:05.850 10:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:06.109 [ 00:08:06.109 { 00:08:06.109 "name": "BaseBdev1", 00:08:06.109 "aliases": [ 00:08:06.109 "e6b1f07d-2711-11ef-b084-113036b5c18d" 00:08:06.109 ], 00:08:06.109 "product_name": "Malloc disk", 00:08:06.109 "block_size": 512, 00:08:06.109 "num_blocks": 65536, 00:08:06.109 "uuid": "e6b1f07d-2711-11ef-b084-113036b5c18d", 00:08:06.109 "assigned_rate_limits": { 00:08:06.109 "rw_ios_per_sec": 0, 00:08:06.109 
"rw_mbytes_per_sec": 0, 00:08:06.109 "r_mbytes_per_sec": 0, 00:08:06.109 "w_mbytes_per_sec": 0 00:08:06.109 }, 00:08:06.109 "claimed": true, 00:08:06.109 "claim_type": "exclusive_write", 00:08:06.109 "zoned": false, 00:08:06.109 "supported_io_types": { 00:08:06.109 "read": true, 00:08:06.109 "write": true, 00:08:06.109 "unmap": true, 00:08:06.109 "write_zeroes": true, 00:08:06.109 "flush": true, 00:08:06.109 "reset": true, 00:08:06.109 "compare": false, 00:08:06.109 "compare_and_write": false, 00:08:06.109 "abort": true, 00:08:06.109 "nvme_admin": false, 00:08:06.109 "nvme_io": false 00:08:06.109 }, 00:08:06.109 "memory_domains": [ 00:08:06.109 { 00:08:06.109 "dma_device_id": "system", 00:08:06.109 "dma_device_type": 1 00:08:06.109 }, 00:08:06.109 { 00:08:06.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.109 "dma_device_type": 2 00:08:06.109 } 00:08:06.109 ], 00:08:06.109 "driver_specific": {} 00:08:06.109 } 00:08:06.109 ] 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:06.109 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.369 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:06.369 "name": "Existed_Raid", 00:08:06.369 "uuid": "e68cb377-2711-11ef-b084-113036b5c18d", 00:08:06.369 "strip_size_kb": 0, 00:08:06.369 "state": "configuring", 00:08:06.369 "raid_level": "raid1", 00:08:06.369 "superblock": true, 00:08:06.369 "num_base_bdevs": 2, 00:08:06.369 "num_base_bdevs_discovered": 1, 00:08:06.369 "num_base_bdevs_operational": 2, 00:08:06.369 "base_bdevs_list": [ 00:08:06.369 { 00:08:06.369 "name": "BaseBdev1", 00:08:06.369 "uuid": "e6b1f07d-2711-11ef-b084-113036b5c18d", 00:08:06.369 "is_configured": true, 00:08:06.369 "data_offset": 2048, 00:08:06.369 "data_size": 63488 00:08:06.369 }, 00:08:06.369 { 00:08:06.369 "name": "BaseBdev2", 00:08:06.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.369 "is_configured": false, 00:08:06.369 "data_offset": 0, 00:08:06.369 "data_size": 0 00:08:06.369 } 00:08:06.369 ] 00:08:06.369 }' 
00:08:06.369 10:12:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:06.369 10:12:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.628 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:06.887 [2024-06-10 10:12:12.264469] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.887 [2024-06-10 10:12:12.264494] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d081500 name Existed_Raid, state configuring 00:08:06.887 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:07.146 [2024-06-10 10:12:12.528511] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.146 [2024-06-10 10:12:12.529200] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.146 [2024-06-10 10:12:12.529241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:07.146 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.405 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:07.405 "name": "Existed_Raid", 00:08:07.405 "uuid": "e7994aba-2711-11ef-b084-113036b5c18d", 00:08:07.405 "strip_size_kb": 0, 00:08:07.405 "state": "configuring", 00:08:07.405 "raid_level": "raid1", 00:08:07.405 "superblock": true, 00:08:07.405 "num_base_bdevs": 2, 00:08:07.405 "num_base_bdevs_discovered": 1, 00:08:07.405 "num_base_bdevs_operational": 2, 00:08:07.405 "base_bdevs_list": [ 00:08:07.405 { 
00:08:07.405 "name": "BaseBdev1", 00:08:07.405 "uuid": "e6b1f07d-2711-11ef-b084-113036b5c18d", 00:08:07.405 "is_configured": true, 00:08:07.405 "data_offset": 2048, 00:08:07.405 "data_size": 63488 00:08:07.405 }, 00:08:07.405 { 00:08:07.405 "name": "BaseBdev2", 00:08:07.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.405 "is_configured": false, 00:08:07.405 "data_offset": 0, 00:08:07.405 "data_size": 0 00:08:07.405 } 00:08:07.405 ] 00:08:07.405 }' 00:08:07.405 10:12:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:07.405 10:12:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.664 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.923 [2024-06-10 10:12:13.404739] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.923 [2024-06-10 10:12:13.404806] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d081a00 00:08:07.923 [2024-06-10 10:12:13.404810] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:07.923 [2024-06-10 10:12:13.404828] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d0e4ec0 00:08:07.923 [2024-06-10 10:12:13.404860] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d081a00 00:08:07.923 [2024-06-10 10:12:13.404863] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d081a00 00:08:07.923 [2024-06-10 10:12:13.404878] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.923 BaseBdev2 00:08:07.923 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:07.923 10:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:08:07.923 10:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:08:07.923 10:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:08:07.923 10:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:08:07.923 10:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:08:07.923 10:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:08.182 10:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:08.441 [ 00:08:08.441 { 00:08:08.441 "name": "BaseBdev2", 00:08:08.441 "aliases": [ 00:08:08.441 "e81efaf9-2711-11ef-b084-113036b5c18d" 00:08:08.441 ], 00:08:08.441 "product_name": "Malloc disk", 00:08:08.441 "block_size": 512, 00:08:08.441 "num_blocks": 65536, 00:08:08.441 "uuid": "e81efaf9-2711-11ef-b084-113036b5c18d", 00:08:08.441 "assigned_rate_limits": { 00:08:08.441 "rw_ios_per_sec": 0, 00:08:08.441 "rw_mbytes_per_sec": 0, 00:08:08.441 "r_mbytes_per_sec": 0, 00:08:08.441 "w_mbytes_per_sec": 0 00:08:08.441 }, 00:08:08.441 "claimed": true, 00:08:08.441 "claim_type": "exclusive_write", 00:08:08.441 "zoned": false, 00:08:08.441 "supported_io_types": { 
00:08:08.441 "read": true, 00:08:08.441 "write": true, 00:08:08.441 "unmap": true, 00:08:08.441 "write_zeroes": true, 00:08:08.441 "flush": true, 00:08:08.441 "reset": true, 00:08:08.441 "compare": false, 00:08:08.441 "compare_and_write": false, 00:08:08.441 "abort": true, 00:08:08.441 "nvme_admin": false, 00:08:08.441 "nvme_io": false 00:08:08.441 }, 00:08:08.441 "memory_domains": [ 00:08:08.441 { 00:08:08.441 "dma_device_id": "system", 00:08:08.441 "dma_device_type": 1 00:08:08.441 }, 00:08:08.441 { 00:08:08.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.441 "dma_device_type": 2 00:08:08.441 } 00:08:08.441 ], 00:08:08.441 "driver_specific": {} 00:08:08.441 } 00:08:08.441 ] 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:08.441 10:12:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.699 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:08.699 "name": "Existed_Raid", 00:08:08.699 "uuid": "e7994aba-2711-11ef-b084-113036b5c18d", 00:08:08.699 "strip_size_kb": 0, 00:08:08.699 "state": "online", 00:08:08.699 "raid_level": "raid1", 00:08:08.699 "superblock": true, 00:08:08.699 "num_base_bdevs": 2, 00:08:08.699 "num_base_bdevs_discovered": 2, 00:08:08.699 "num_base_bdevs_operational": 2, 00:08:08.699 "base_bdevs_list": [ 00:08:08.699 { 00:08:08.699 "name": "BaseBdev1", 00:08:08.699 "uuid": "e6b1f07d-2711-11ef-b084-113036b5c18d", 00:08:08.699 "is_configured": true, 00:08:08.699 "data_offset": 2048, 00:08:08.699 "data_size": 63488 00:08:08.699 }, 00:08:08.699 { 00:08:08.699 "name": "BaseBdev2", 00:08:08.699 "uuid": "e81efaf9-2711-11ef-b084-113036b5c18d", 00:08:08.699 "is_configured": true, 00:08:08.699 "data_offset": 2048, 00:08:08.699 "data_size": 63488 00:08:08.699 } 00:08:08.699 ] 00:08:08.699 }' 00:08:08.699 10:12:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:08.699 10:12:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.959 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.959 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:08.959 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:08.959 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:08.959 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:08.959 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:08.959 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:08.959 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:09.218 [2024-06-10 10:12:14.692825] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.218 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:09.218 "name": "Existed_Raid", 00:08:09.218 "aliases": [ 00:08:09.218 "e7994aba-2711-11ef-b084-113036b5c18d" 00:08:09.218 ], 00:08:09.218 "product_name": "Raid Volume", 00:08:09.218 "block_size": 512, 00:08:09.218 "num_blocks": 63488, 00:08:09.218 "uuid": "e7994aba-2711-11ef-b084-113036b5c18d", 00:08:09.218 "assigned_rate_limits": { 00:08:09.218 "rw_ios_per_sec": 0, 00:08:09.218 "rw_mbytes_per_sec": 0, 00:08:09.218 "r_mbytes_per_sec": 0, 00:08:09.218 "w_mbytes_per_sec": 0 00:08:09.218 }, 00:08:09.218 "claimed": false, 00:08:09.218 "zoned": false, 00:08:09.218 "supported_io_types": { 00:08:09.218 "read": true, 00:08:09.218 "write": true, 00:08:09.218 "unmap": false, 00:08:09.218 "write_zeroes": true, 00:08:09.218 "flush": false, 00:08:09.218 "reset": true, 00:08:09.218 "compare": false, 00:08:09.218 "compare_and_write": false, 00:08:09.218 "abort": false, 00:08:09.218 "nvme_admin": false, 00:08:09.218 "nvme_io": false 00:08:09.218 }, 00:08:09.218 "memory_domains": [ 00:08:09.218 { 00:08:09.218 "dma_device_id": "system", 00:08:09.218 "dma_device_type": 1 00:08:09.218 }, 00:08:09.218 { 00:08:09.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.218 "dma_device_type": 2 00:08:09.218 }, 00:08:09.218 { 00:08:09.218 "dma_device_id": "system", 00:08:09.218 "dma_device_type": 1 00:08:09.218 }, 00:08:09.218 { 00:08:09.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.218 "dma_device_type": 2 00:08:09.218 } 00:08:09.218 ], 00:08:09.218 "driver_specific": { 00:08:09.218 "raid": { 00:08:09.218 "uuid": "e7994aba-2711-11ef-b084-113036b5c18d", 00:08:09.218 "strip_size_kb": 0, 00:08:09.218 "state": "online", 00:08:09.218 "raid_level": "raid1", 00:08:09.218 "superblock": true, 00:08:09.218 "num_base_bdevs": 2, 00:08:09.218 "num_base_bdevs_discovered": 2, 00:08:09.218 "num_base_bdevs_operational": 2, 00:08:09.218 "base_bdevs_list": [ 00:08:09.219 { 00:08:09.219 "name": "BaseBdev1", 00:08:09.219 "uuid": "e6b1f07d-2711-11ef-b084-113036b5c18d", 00:08:09.219 "is_configured": true, 00:08:09.219 "data_offset": 2048, 00:08:09.219 "data_size": 63488 00:08:09.219 }, 00:08:09.219 { 00:08:09.219 "name": "BaseBdev2", 00:08:09.219 
"uuid": "e81efaf9-2711-11ef-b084-113036b5c18d", 00:08:09.219 "is_configured": true, 00:08:09.219 "data_offset": 2048, 00:08:09.219 "data_size": 63488 00:08:09.219 } 00:08:09.219 ] 00:08:09.219 } 00:08:09.219 } 00:08:09.219 }' 00:08:09.219 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.219 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:09.219 BaseBdev2' 00:08:09.219 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:09.219 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:09.219 10:12:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:09.477 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:09.477 "name": "BaseBdev1", 00:08:09.477 "aliases": [ 00:08:09.477 "e6b1f07d-2711-11ef-b084-113036b5c18d" 00:08:09.477 ], 00:08:09.477 "product_name": "Malloc disk", 00:08:09.477 "block_size": 512, 00:08:09.477 "num_blocks": 65536, 00:08:09.477 "uuid": "e6b1f07d-2711-11ef-b084-113036b5c18d", 00:08:09.477 "assigned_rate_limits": { 00:08:09.477 "rw_ios_per_sec": 0, 00:08:09.477 "rw_mbytes_per_sec": 0, 00:08:09.477 "r_mbytes_per_sec": 0, 00:08:09.477 "w_mbytes_per_sec": 0 00:08:09.477 }, 00:08:09.477 "claimed": true, 00:08:09.477 "claim_type": "exclusive_write", 00:08:09.477 "zoned": false, 00:08:09.477 "supported_io_types": { 00:08:09.477 "read": true, 00:08:09.477 "write": true, 00:08:09.478 "unmap": true, 00:08:09.478 "write_zeroes": true, 00:08:09.478 "flush": true, 00:08:09.478 "reset": true, 00:08:09.478 "compare": false, 00:08:09.478 "compare_and_write": false, 00:08:09.478 "abort": true, 00:08:09.478 "nvme_admin": false, 00:08:09.478 "nvme_io": false 00:08:09.478 }, 00:08:09.478 "memory_domains": [ 00:08:09.478 { 00:08:09.478 "dma_device_id": "system", 00:08:09.478 "dma_device_type": 1 00:08:09.478 }, 00:08:09.478 { 00:08:09.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.478 "dma_device_type": 2 00:08:09.478 } 00:08:09.478 ], 00:08:09.478 "driver_specific": {} 00:08:09.478 }' 00:08:09.478 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:09.737 
10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:09.737 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:09.998 "name": "BaseBdev2", 00:08:09.998 "aliases": [ 00:08:09.998 "e81efaf9-2711-11ef-b084-113036b5c18d" 00:08:09.998 ], 00:08:09.998 "product_name": "Malloc disk", 00:08:09.998 "block_size": 512, 00:08:09.998 "num_blocks": 65536, 00:08:09.998 "uuid": "e81efaf9-2711-11ef-b084-113036b5c18d", 00:08:09.998 "assigned_rate_limits": { 00:08:09.998 "rw_ios_per_sec": 0, 00:08:09.998 "rw_mbytes_per_sec": 0, 00:08:09.998 "r_mbytes_per_sec": 0, 00:08:09.998 "w_mbytes_per_sec": 0 00:08:09.998 }, 00:08:09.998 "claimed": true, 00:08:09.998 "claim_type": "exclusive_write", 00:08:09.998 "zoned": false, 00:08:09.998 "supported_io_types": { 00:08:09.998 "read": true, 00:08:09.998 "write": true, 00:08:09.998 "unmap": true, 00:08:09.998 "write_zeroes": true, 00:08:09.998 "flush": true, 00:08:09.998 "reset": true, 00:08:09.998 "compare": false, 00:08:09.998 "compare_and_write": false, 00:08:09.998 "abort": true, 00:08:09.998 "nvme_admin": false, 00:08:09.998 "nvme_io": false 00:08:09.998 }, 00:08:09.998 "memory_domains": [ 00:08:09.998 { 00:08:09.998 "dma_device_id": "system", 00:08:09.998 "dma_device_type": 1 00:08:09.998 }, 00:08:09.998 { 00:08:09.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.998 "dma_device_type": 2 00:08:09.998 } 00:08:09.998 ], 00:08:09.998 "driver_specific": {} 00:08:09.998 }' 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:09.998 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:10.262 [2024-06-10 10:12:15.656915] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.262 10:12:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.262 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.520 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:10.520 "name": "Existed_Raid", 00:08:10.520 "uuid": "e7994aba-2711-11ef-b084-113036b5c18d", 00:08:10.520 "strip_size_kb": 0, 00:08:10.520 "state": "online", 00:08:10.520 "raid_level": "raid1", 00:08:10.520 "superblock": true, 00:08:10.520 "num_base_bdevs": 2, 00:08:10.520 "num_base_bdevs_discovered": 1, 00:08:10.520 "num_base_bdevs_operational": 1, 00:08:10.520 "base_bdevs_list": [ 00:08:10.520 { 00:08:10.520 "name": null, 00:08:10.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.520 "is_configured": false, 00:08:10.520 "data_offset": 2048, 00:08:10.520 "data_size": 63488 00:08:10.520 }, 00:08:10.520 { 00:08:10.520 "name": "BaseBdev2", 00:08:10.520 "uuid": "e81efaf9-2711-11ef-b084-113036b5c18d", 00:08:10.520 "is_configured": true, 00:08:10.520 "data_offset": 2048, 00:08:10.520 "data_size": 63488 00:08:10.520 } 00:08:10.520 ] 00:08:10.520 }' 00:08:10.520 10:12:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:10.520 10:12:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.779 10:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:10.779 10:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:10.779 10:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.779 10:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:11.346 10:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:11.346 10:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.346 10:12:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:11.605 [2024-06-10 10:12:17.089954] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.605 [2024-06-10 10:12:17.089981] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.605 [2024-06-10 10:12:17.094675] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.605 [2024-06-10 10:12:17.094686] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.605 [2024-06-10 10:12:17.094690] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d081a00 name Existed_Raid, state offline 00:08:11.605 10:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:11.605 10:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:11.605 10:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:11.605 10:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 51805 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 51805 ']' 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 51805 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps -c -o command 51805 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # tail -1 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:08:11.864 killing process with pid 51805 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 51805' 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 51805 00:08:11.864 [2024-06-10 10:12:17.393148] bdev_raid.c:1358:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:08:11.864 [2024-06-10 10:12:17.393183] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.864 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 51805 00:08:12.123 10:12:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:12.123 ************************************ 00:08:12.123 END TEST raid_state_function_test_sb 00:08:12.123 ************************************ 00:08:12.123 00:08:12.123 real 0m9.025s 00:08:12.123 user 0m15.730s 00:08:12.123 sys 0m1.594s 00:08:12.123 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:12.123 10:12:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.123 10:12:17 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:12.123 10:12:17 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:08:12.123 10:12:17 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:12.123 10:12:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.123 ************************************ 00:08:12.123 START TEST raid_superblock_test 00:08:12.123 ************************************ 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=52079 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 52079 /var/tmp/spdk-raid.sock 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 52079 ']' 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:12.123 10:12:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:12.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:12.123 10:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.123 [2024-06-10 10:12:17.614615] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:08:12.124 [2024-06-10 10:12:17.614805] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:12.691 EAL: TSC is not safe to use in SMP mode 00:08:12.691 EAL: TSC is not invariant 00:08:12.691 [2024-06-10 10:12:18.088203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.691 [2024-06-10 10:12:18.164805] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:12.691 [2024-06-10 10:12:18.166813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.691 [2024-06-10 10:12:18.167543] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.691 [2024-06-10 10:12:18.167555] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:13.260 malloc1 00:08:13.260 10:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.520 [2024-06-10 10:12:19.069632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.520 [2024-06-10 10:12:19.069684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.520 [2024-06-10 10:12:19.069694] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d116780 00:08:13.520 [2024-06-10 10:12:19.069700] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.520 [2024-06-10 10:12:19.070410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.520 [2024-06-10 10:12:19.070440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.520 pt1 00:08:13.520 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:13.520 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:13.520 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:08:13.520 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:08:13.520 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:13.520 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:13.520 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:13.520 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:13.520 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:13.778 malloc2 00:08:13.778 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.037 [2024-06-10 10:12:19.465671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.037 [2024-06-10 10:12:19.465723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.037 [2024-06-10 10:12:19.465732] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d116c80 00:08:14.037 [2024-06-10 10:12:19.465739] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.037 [2024-06-10 10:12:19.466226] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.037 [2024-06-10 10:12:19.466249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.037 pt2 00:08:14.037 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:14.037 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:14.037 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:08:14.296 [2024-06-10 10:12:19.661690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.296 [2024-06-10 10:12:19.662120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.296 [2024-06-10 10:12:19.662184] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d116f00 00:08:14.296 [2024-06-10 10:12:19.662189] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:14.296 [2024-06-10 10:12:19.662222] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x82d179e20 00:08:14.296 [2024-06-10 10:12:19.662284] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d116f00 00:08:14.296 [2024-06-10 10:12:19.662287] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d116f00 00:08:14.296 [2024-06-10 10:12:19.662307] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:14.296 "name": "raid_bdev1", 00:08:14.296 "uuid": "ebd9ba70-2711-11ef-b084-113036b5c18d", 00:08:14.296 "strip_size_kb": 0, 00:08:14.296 "state": "online", 00:08:14.296 "raid_level": "raid1", 00:08:14.296 "superblock": true, 00:08:14.296 "num_base_bdevs": 2, 00:08:14.296 "num_base_bdevs_discovered": 2, 00:08:14.296 "num_base_bdevs_operational": 2, 00:08:14.296 "base_bdevs_list": [ 00:08:14.296 { 00:08:14.296 "name": "pt1", 00:08:14.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.296 "is_configured": true, 00:08:14.296 "data_offset": 2048, 00:08:14.296 "data_size": 63488 00:08:14.296 }, 00:08:14.296 { 00:08:14.296 "name": "pt2", 00:08:14.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.296 "is_configured": true, 00:08:14.296 "data_offset": 2048, 00:08:14.296 "data_size": 63488 00:08:14.296 } 00:08:14.296 ] 00:08:14.296 }' 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:14.296 10:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.588 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:08:14.588 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:14.588 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:14.588 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:14.588 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:14.588 10:12:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:14.588 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:14.588 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:14.847 [2024-06-10 10:12:20.409834] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.847 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:14.847 "name": "raid_bdev1", 00:08:14.847 "aliases": [ 00:08:14.847 "ebd9ba70-2711-11ef-b084-113036b5c18d" 00:08:14.847 ], 00:08:14.847 "product_name": "Raid Volume", 00:08:14.847 "block_size": 512, 00:08:14.847 "num_blocks": 63488, 00:08:14.847 "uuid": "ebd9ba70-2711-11ef-b084-113036b5c18d", 00:08:14.847 "assigned_rate_limits": { 00:08:14.847 "rw_ios_per_sec": 0, 00:08:14.847 "rw_mbytes_per_sec": 0, 00:08:14.847 "r_mbytes_per_sec": 0, 00:08:14.847 "w_mbytes_per_sec": 0 00:08:14.847 }, 00:08:14.847 "claimed": false, 00:08:14.847 "zoned": false, 00:08:14.847 "supported_io_types": { 00:08:14.847 "read": true, 00:08:14.847 "write": true, 00:08:14.847 "unmap": false, 00:08:14.847 "write_zeroes": true, 00:08:14.847 "flush": false, 00:08:14.847 "reset": true, 00:08:14.847 "compare": false, 00:08:14.847 "compare_and_write": false, 00:08:14.847 "abort": false, 00:08:14.847 "nvme_admin": false, 00:08:14.847 "nvme_io": false 00:08:14.847 }, 00:08:14.847 "memory_domains": [ 00:08:14.847 { 00:08:14.847 "dma_device_id": "system", 00:08:14.847 "dma_device_type": 1 00:08:14.847 }, 00:08:14.847 { 00:08:14.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.847 "dma_device_type": 2 00:08:14.847 }, 00:08:14.847 { 00:08:14.847 "dma_device_id": "system", 00:08:14.847 "dma_device_type": 1 00:08:14.847 }, 00:08:14.847 { 00:08:14.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.847 "dma_device_type": 2 00:08:14.847 } 00:08:14.847 ], 00:08:14.847 "driver_specific": { 00:08:14.847 "raid": { 00:08:14.847 "uuid": "ebd9ba70-2711-11ef-b084-113036b5c18d", 00:08:14.847 "strip_size_kb": 0, 00:08:14.847 "state": "online", 00:08:14.847 "raid_level": "raid1", 00:08:14.847 "superblock": true, 00:08:14.847 "num_base_bdevs": 2, 00:08:14.847 "num_base_bdevs_discovered": 2, 00:08:14.847 "num_base_bdevs_operational": 2, 00:08:14.847 "base_bdevs_list": [ 00:08:14.847 { 00:08:14.847 "name": "pt1", 00:08:14.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.847 "is_configured": true, 00:08:14.847 "data_offset": 2048, 00:08:14.847 "data_size": 63488 00:08:14.847 }, 00:08:14.847 { 00:08:14.847 "name": "pt2", 00:08:14.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.847 "is_configured": true, 00:08:14.847 "data_offset": 2048, 00:08:14.847 "data_size": 63488 00:08:14.847 } 00:08:14.847 ] 00:08:14.847 } 00:08:14.847 } 00:08:14.847 }' 00:08:14.847 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:14.847 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:14.847 pt2' 00:08:14.847 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:14.847 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:14.847 10:12:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:15.414 "name": "pt1", 00:08:15.414 "aliases": [ 00:08:15.414 "00000000-0000-0000-0000-000000000001" 00:08:15.414 ], 00:08:15.414 "product_name": "passthru", 00:08:15.414 "block_size": 512, 00:08:15.414 "num_blocks": 65536, 00:08:15.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.414 "assigned_rate_limits": { 00:08:15.414 "rw_ios_per_sec": 0, 00:08:15.414 "rw_mbytes_per_sec": 0, 00:08:15.414 "r_mbytes_per_sec": 0, 00:08:15.414 "w_mbytes_per_sec": 0 00:08:15.414 }, 00:08:15.414 "claimed": true, 00:08:15.414 "claim_type": "exclusive_write", 00:08:15.414 "zoned": false, 00:08:15.414 "supported_io_types": { 00:08:15.414 "read": true, 00:08:15.414 "write": true, 00:08:15.414 "unmap": true, 00:08:15.414 "write_zeroes": true, 00:08:15.414 "flush": true, 00:08:15.414 "reset": true, 00:08:15.414 "compare": false, 00:08:15.414 "compare_and_write": false, 00:08:15.414 "abort": true, 00:08:15.414 "nvme_admin": false, 00:08:15.414 "nvme_io": false 00:08:15.414 }, 00:08:15.414 "memory_domains": [ 00:08:15.414 { 00:08:15.414 "dma_device_id": "system", 00:08:15.414 "dma_device_type": 1 00:08:15.414 }, 00:08:15.414 { 00:08:15.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.414 "dma_device_type": 2 00:08:15.414 } 00:08:15.414 ], 00:08:15.414 "driver_specific": { 00:08:15.414 "passthru": { 00:08:15.414 "name": "pt1", 00:08:15.414 "base_bdev_name": "malloc1" 00:08:15.414 } 00:08:15.414 } 00:08:15.414 }' 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:15.414 10:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:15.673 "name": "pt2", 00:08:15.673 "aliases": [ 00:08:15.673 "00000000-0000-0000-0000-000000000002" 00:08:15.673 ], 00:08:15.673 "product_name": "passthru", 00:08:15.673 "block_size": 512, 00:08:15.673 "num_blocks": 65536, 00:08:15.673 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:15.673 "assigned_rate_limits": { 00:08:15.673 "rw_ios_per_sec": 0, 00:08:15.673 "rw_mbytes_per_sec": 0, 00:08:15.673 "r_mbytes_per_sec": 0, 00:08:15.673 "w_mbytes_per_sec": 0 00:08:15.673 }, 00:08:15.673 "claimed": true, 00:08:15.673 "claim_type": "exclusive_write", 00:08:15.673 "zoned": false, 00:08:15.673 "supported_io_types": { 00:08:15.673 "read": true, 00:08:15.673 "write": true, 00:08:15.673 "unmap": true, 00:08:15.673 "write_zeroes": true, 00:08:15.673 "flush": true, 00:08:15.673 "reset": true, 00:08:15.673 "compare": false, 00:08:15.673 "compare_and_write": false, 00:08:15.673 "abort": true, 00:08:15.673 "nvme_admin": false, 00:08:15.673 "nvme_io": false 00:08:15.673 }, 00:08:15.673 "memory_domains": [ 00:08:15.673 { 00:08:15.673 "dma_device_id": "system", 00:08:15.673 "dma_device_type": 1 00:08:15.673 }, 00:08:15.673 { 00:08:15.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.673 "dma_device_type": 2 00:08:15.673 } 00:08:15.673 ], 00:08:15.673 "driver_specific": { 00:08:15.673 "passthru": { 00:08:15.673 "name": "pt2", 00:08:15.673 "base_bdev_name": "malloc2" 00:08:15.673 } 00:08:15.673 } 00:08:15.673 }' 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:08:15.673 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:15.932 [2024-06-10 10:12:21.381921] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.932 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=ebd9ba70-2711-11ef-b084-113036b5c18d 00:08:15.932 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z ebd9ba70-2711-11ef-b084-113036b5c18d ']' 00:08:15.932 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:16.191 [2024-06-10 10:12:21.649915] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.191 [2024-06-10 10:12:21.649938] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.191 [2024-06-10 10:12:21.649954] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.191 [2024-06-10 10:12:21.649966] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.191 [2024-06-10 10:12:21.649970] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d116f00 name raid_bdev1, state offline 00:08:16.191 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:16.191 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.448 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:16.448 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:16.448 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:16.448 10:12:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:16.733 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:16.733 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:16.733 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:16.733 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:16.990 10:12:22 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:17.248 [2024-06-10 10:12:22.646029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:17.248 [2024-06-10 10:12:22.646469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:17.248 [2024-06-10 10:12:22.646486] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:17.248 [2024-06-10 10:12:22.646519] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:17.248 [2024-06-10 10:12:22.646527] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:17.248 [2024-06-10 10:12:22.646531] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d116c80 name raid_bdev1, state configuring 00:08:17.248 request: 00:08:17.248 { 00:08:17.248 "name": "raid_bdev1", 00:08:17.248 "raid_level": "raid1", 00:08:17.248 "base_bdevs": [ 00:08:17.248 "malloc1", 00:08:17.248 "malloc2" 00:08:17.248 ], 00:08:17.248 "superblock": false, 00:08:17.248 "method": "bdev_raid_create", 00:08:17.248 "req_id": 1 00:08:17.248 } 00:08:17.248 Got JSON-RPC error response 00:08:17.248 response: 00:08:17.248 { 00:08:17.248 "code": -17, 00:08:17.248 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:17.248 } 00:08:17.248 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:08:17.248 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:08:17.248 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:08:17.248 10:12:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:08:17.248 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.248 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:17.507 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:17.507 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:17.507 10:12:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:17.507 [2024-06-10 10:12:23.094061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:17.507 [2024-06-10 10:12:23.094110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.507 [2024-06-10 10:12:23.094120] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d116780 00:08:17.507 [2024-06-10 10:12:23.094127] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.507 [2024-06-10 10:12:23.094604] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.507 [2024-06-10 10:12:23.094625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:17.507 [2024-06-10 10:12:23.094645] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:17.507 [2024-06-10 10:12:23.094655] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:17.507 pt1 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:17.766 "name": "raid_bdev1", 00:08:17.766 "uuid": "ebd9ba70-2711-11ef-b084-113036b5c18d", 00:08:17.766 "strip_size_kb": 0, 00:08:17.766 "state": "configuring", 00:08:17.766 "raid_level": "raid1", 00:08:17.766 "superblock": true, 00:08:17.766 "num_base_bdevs": 2, 00:08:17.766 "num_base_bdevs_discovered": 1, 00:08:17.766 "num_base_bdevs_operational": 2, 00:08:17.766 "base_bdevs_list": [ 00:08:17.766 { 00:08:17.766 "name": "pt1", 00:08:17.766 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:17.766 "is_configured": true, 00:08:17.766 "data_offset": 2048, 00:08:17.766 "data_size": 63488 00:08:17.766 }, 00:08:17.766 { 00:08:17.766 "name": null, 00:08:17.766 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:17.766 "is_configured": false, 00:08:17.766 "data_offset": 2048, 00:08:17.766 "data_size": 63488 00:08:17.766 } 00:08:17.766 ] 00:08:17.766 }' 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:17.766 10:12:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.024 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:08:18.024 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:18.024 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:18.024 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:18.283 [2024-06-10 10:12:23.798130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:18.283 [2024-06-10 10:12:23.798180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.283 [2024-06-10 10:12:23.798191] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x82d116f00 00:08:18.283 [2024-06-10 10:12:23.798198] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.283 [2024-06-10 10:12:23.798285] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.283 [2024-06-10 10:12:23.798293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:18.283 [2024-06-10 10:12:23.798312] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:18.283 [2024-06-10 10:12:23.798318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:18.283 [2024-06-10 10:12:23.798345] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d117180 00:08:18.283 [2024-06-10 10:12:23.798349] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:18.283 [2024-06-10 10:12:23.798364] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d179e20 00:08:18.283 [2024-06-10 10:12:23.798397] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d117180 00:08:18.283 [2024-06-10 10:12:23.798400] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d117180 00:08:18.283 [2024-06-10 10:12:23.798415] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.283 pt2 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:18.283 10:12:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.545 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:18.545 "name": "raid_bdev1", 00:08:18.545 "uuid": "ebd9ba70-2711-11ef-b084-113036b5c18d", 00:08:18.545 "strip_size_kb": 0, 00:08:18.545 "state": "online", 00:08:18.545 "raid_level": "raid1", 00:08:18.545 "superblock": true, 00:08:18.545 "num_base_bdevs": 2, 00:08:18.545 "num_base_bdevs_discovered": 2, 00:08:18.545 "num_base_bdevs_operational": 2, 00:08:18.545 "base_bdevs_list": [ 00:08:18.545 { 00:08:18.545 "name": 
"pt1", 00:08:18.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:18.545 "is_configured": true, 00:08:18.545 "data_offset": 2048, 00:08:18.545 "data_size": 63488 00:08:18.545 }, 00:08:18.545 { 00:08:18.545 "name": "pt2", 00:08:18.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:18.545 "is_configured": true, 00:08:18.545 "data_offset": 2048, 00:08:18.545 "data_size": 63488 00:08:18.545 } 00:08:18.545 ] 00:08:18.545 }' 00:08:18.545 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:18.545 10:12:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.804 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:08:18.804 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:18.804 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:18.804 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:18.804 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:18.804 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:18.804 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:18.804 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:19.062 [2024-06-10 10:12:24.522227] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.062 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:19.062 "name": "raid_bdev1", 00:08:19.062 "aliases": [ 00:08:19.062 "ebd9ba70-2711-11ef-b084-113036b5c18d" 00:08:19.062 ], 00:08:19.062 "product_name": "Raid Volume", 00:08:19.062 "block_size": 512, 00:08:19.062 "num_blocks": 63488, 00:08:19.062 "uuid": "ebd9ba70-2711-11ef-b084-113036b5c18d", 00:08:19.062 "assigned_rate_limits": { 00:08:19.062 "rw_ios_per_sec": 0, 00:08:19.062 "rw_mbytes_per_sec": 0, 00:08:19.062 "r_mbytes_per_sec": 0, 00:08:19.062 "w_mbytes_per_sec": 0 00:08:19.062 }, 00:08:19.062 "claimed": false, 00:08:19.062 "zoned": false, 00:08:19.062 "supported_io_types": { 00:08:19.062 "read": true, 00:08:19.062 "write": true, 00:08:19.062 "unmap": false, 00:08:19.062 "write_zeroes": true, 00:08:19.062 "flush": false, 00:08:19.062 "reset": true, 00:08:19.062 "compare": false, 00:08:19.062 "compare_and_write": false, 00:08:19.062 "abort": false, 00:08:19.062 "nvme_admin": false, 00:08:19.062 "nvme_io": false 00:08:19.062 }, 00:08:19.062 "memory_domains": [ 00:08:19.062 { 00:08:19.062 "dma_device_id": "system", 00:08:19.062 "dma_device_type": 1 00:08:19.062 }, 00:08:19.062 { 00:08:19.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.062 "dma_device_type": 2 00:08:19.062 }, 00:08:19.062 { 00:08:19.062 "dma_device_id": "system", 00:08:19.062 "dma_device_type": 1 00:08:19.062 }, 00:08:19.062 { 00:08:19.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.062 "dma_device_type": 2 00:08:19.062 } 00:08:19.062 ], 00:08:19.062 "driver_specific": { 00:08:19.062 "raid": { 00:08:19.062 "uuid": "ebd9ba70-2711-11ef-b084-113036b5c18d", 00:08:19.062 "strip_size_kb": 0, 00:08:19.062 "state": "online", 00:08:19.062 "raid_level": "raid1", 00:08:19.062 "superblock": true, 00:08:19.062 "num_base_bdevs": 2, 00:08:19.062 "num_base_bdevs_discovered": 
2, 00:08:19.062 "num_base_bdevs_operational": 2, 00:08:19.062 "base_bdevs_list": [ 00:08:19.062 { 00:08:19.062 "name": "pt1", 00:08:19.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.062 "is_configured": true, 00:08:19.062 "data_offset": 2048, 00:08:19.062 "data_size": 63488 00:08:19.062 }, 00:08:19.062 { 00:08:19.062 "name": "pt2", 00:08:19.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.062 "is_configured": true, 00:08:19.062 "data_offset": 2048, 00:08:19.062 "data_size": 63488 00:08:19.062 } 00:08:19.062 ] 00:08:19.062 } 00:08:19.062 } 00:08:19.062 }' 00:08:19.062 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.062 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:19.062 pt2' 00:08:19.062 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:19.062 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:19.062 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:19.321 "name": "pt1", 00:08:19.321 "aliases": [ 00:08:19.321 "00000000-0000-0000-0000-000000000001" 00:08:19.321 ], 00:08:19.321 "product_name": "passthru", 00:08:19.321 "block_size": 512, 00:08:19.321 "num_blocks": 65536, 00:08:19.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:19.321 "assigned_rate_limits": { 00:08:19.321 "rw_ios_per_sec": 0, 00:08:19.321 "rw_mbytes_per_sec": 0, 00:08:19.321 "r_mbytes_per_sec": 0, 00:08:19.321 "w_mbytes_per_sec": 0 00:08:19.321 }, 00:08:19.321 "claimed": true, 00:08:19.321 "claim_type": "exclusive_write", 00:08:19.321 "zoned": false, 00:08:19.321 "supported_io_types": { 00:08:19.321 "read": true, 00:08:19.321 "write": true, 00:08:19.321 "unmap": true, 00:08:19.321 "write_zeroes": true, 00:08:19.321 "flush": true, 00:08:19.321 "reset": true, 00:08:19.321 "compare": false, 00:08:19.321 "compare_and_write": false, 00:08:19.321 "abort": true, 00:08:19.321 "nvme_admin": false, 00:08:19.321 "nvme_io": false 00:08:19.321 }, 00:08:19.321 "memory_domains": [ 00:08:19.321 { 00:08:19.321 "dma_device_id": "system", 00:08:19.321 "dma_device_type": 1 00:08:19.321 }, 00:08:19.321 { 00:08:19.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.321 "dma_device_type": 2 00:08:19.321 } 00:08:19.321 ], 00:08:19.321 "driver_specific": { 00:08:19.321 "passthru": { 00:08:19.321 "name": "pt1", 00:08:19.321 "base_bdev_name": "malloc1" 00:08:19.321 } 00:08:19.321 } 00:08:19.321 }' 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:19.321 10:12:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:19.888 "name": "pt2", 00:08:19.888 "aliases": [ 00:08:19.888 "00000000-0000-0000-0000-000000000002" 00:08:19.888 ], 00:08:19.888 "product_name": "passthru", 00:08:19.888 "block_size": 512, 00:08:19.888 "num_blocks": 65536, 00:08:19.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:19.888 "assigned_rate_limits": { 00:08:19.888 "rw_ios_per_sec": 0, 00:08:19.888 "rw_mbytes_per_sec": 0, 00:08:19.888 "r_mbytes_per_sec": 0, 00:08:19.888 "w_mbytes_per_sec": 0 00:08:19.888 }, 00:08:19.888 "claimed": true, 00:08:19.888 "claim_type": "exclusive_write", 00:08:19.888 "zoned": false, 00:08:19.888 "supported_io_types": { 00:08:19.888 "read": true, 00:08:19.888 "write": true, 00:08:19.888 "unmap": true, 00:08:19.888 "write_zeroes": true, 00:08:19.888 "flush": true, 00:08:19.888 "reset": true, 00:08:19.888 "compare": false, 00:08:19.888 "compare_and_write": false, 00:08:19.888 "abort": true, 00:08:19.888 "nvme_admin": false, 00:08:19.888 "nvme_io": false 00:08:19.888 }, 00:08:19.888 "memory_domains": [ 00:08:19.888 { 00:08:19.888 "dma_device_id": "system", 00:08:19.888 "dma_device_type": 1 00:08:19.888 }, 00:08:19.888 { 00:08:19.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.888 "dma_device_type": 2 00:08:19.888 } 00:08:19.888 ], 00:08:19.888 "driver_specific": { 00:08:19.888 "passthru": { 00:08:19.888 "name": "pt2", 00:08:19.888 "base_bdev_name": "malloc2" 00:08:19.888 } 00:08:19.888 } 00:08:19.888 }' 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:19.888 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:20.146 [2024-06-10 10:12:25.536244] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.146 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' ebd9ba70-2711-11ef-b084-113036b5c18d '!=' ebd9ba70-2711-11ef-b084-113036b5c18d ']' 00:08:20.146 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:08:20.146 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:20.146 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:20.146 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:20.404 [2024-06-10 10:12:25.780258] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:20.404 "name": "raid_bdev1", 00:08:20.404 "uuid": "ebd9ba70-2711-11ef-b084-113036b5c18d", 00:08:20.404 "strip_size_kb": 0, 00:08:20.404 "state": "online", 00:08:20.404 "raid_level": "raid1", 00:08:20.404 "superblock": true, 00:08:20.404 "num_base_bdevs": 2, 00:08:20.404 "num_base_bdevs_discovered": 1, 00:08:20.404 "num_base_bdevs_operational": 1, 00:08:20.404 "base_bdevs_list": [ 00:08:20.404 { 00:08:20.404 "name": null, 00:08:20.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.404 "is_configured": false, 00:08:20.404 "data_offset": 2048, 00:08:20.404 "data_size": 63488 00:08:20.404 }, 00:08:20.404 { 00:08:20.404 "name": "pt2", 00:08:20.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:20.404 "is_configured": true, 00:08:20.404 "data_offset": 2048, 00:08:20.404 "data_size": 63488 00:08:20.404 } 00:08:20.404 ] 00:08:20.404 }' 00:08:20.404 10:12:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:20.404 10:12:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.971 10:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:20.971 [2024-06-10 10:12:26.532304] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.971 [2024-06-10 10:12:26.532327] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.971 [2024-06-10 10:12:26.532345] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.971 [2024-06-10 10:12:26.532356] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.971 [2024-06-10 10:12:26.532360] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d117180 name raid_bdev1, state offline 00:08:20.971 10:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:20.971 10:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:08:21.230 10:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:08:21.230 10:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:08:21.230 10:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:08:21.230 10:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:21.230 10:12:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:21.488 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:08:21.488 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:21.488 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:08:21.488 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:08:21.488 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:08:21.488 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:21.746 [2024-06-10 10:12:27.248385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:21.746 [2024-06-10 10:12:27.248451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.746 [2024-06-10 10:12:27.248471] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d116f00 00:08:21.746 [2024-06-10 10:12:27.248478] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.746 [2024-06-10 10:12:27.248985] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.746 [2024-06-10 10:12:27.249017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:21.746 [2024-06-10 10:12:27.249038] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:21.746 [2024-06-10 10:12:27.249049] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:08:21.746 [2024-06-10 10:12:27.249069] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d117180 00:08:21.746 [2024-06-10 10:12:27.249072] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:21.746 [2024-06-10 10:12:27.249092] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d179e20 00:08:21.746 [2024-06-10 10:12:27.249124] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d117180 00:08:21.746 [2024-06-10 10:12:27.249127] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d117180 00:08:21.746 [2024-06-10 10:12:27.249143] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.746 pt2 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:21.746 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.004 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:22.004 "name": "raid_bdev1", 00:08:22.004 "uuid": "ebd9ba70-2711-11ef-b084-113036b5c18d", 00:08:22.004 "strip_size_kb": 0, 00:08:22.004 "state": "online", 00:08:22.004 "raid_level": "raid1", 00:08:22.004 "superblock": true, 00:08:22.004 "num_base_bdevs": 2, 00:08:22.004 "num_base_bdevs_discovered": 1, 00:08:22.004 "num_base_bdevs_operational": 1, 00:08:22.004 "base_bdevs_list": [ 00:08:22.004 { 00:08:22.004 "name": null, 00:08:22.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.004 "is_configured": false, 00:08:22.004 "data_offset": 2048, 00:08:22.004 "data_size": 63488 00:08:22.004 }, 00:08:22.004 { 00:08:22.004 "name": "pt2", 00:08:22.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.004 "is_configured": true, 00:08:22.004 "data_offset": 2048, 00:08:22.004 "data_size": 63488 00:08:22.004 } 00:08:22.004 ] 00:08:22.004 }' 00:08:22.004 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:22.004 10:12:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.262 10:12:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
raid_bdev1 00:08:22.521 [2024-06-10 10:12:28.016443] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.521 [2024-06-10 10:12:28.016469] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.521 [2024-06-10 10:12:28.016490] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.521 [2024-06-10 10:12:28.016501] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.521 [2024-06-10 10:12:28.016505] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d117180 name raid_bdev1, state offline 00:08:22.521 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:08:22.521 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:22.780 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:08:22.780 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:08:22.780 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:08:22.780 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:23.038 [2024-06-10 10:12:28.456494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:23.038 [2024-06-10 10:12:28.456551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.038 [2024-06-10 10:12:28.456564] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d116c80 00:08:23.038 [2024-06-10 10:12:28.456571] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.038 [2024-06-10 10:12:28.457094] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.038 [2024-06-10 10:12:28.457128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:23.038 [2024-06-10 10:12:28.457148] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:23.038 [2024-06-10 10:12:28.457158] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:23.038 [2024-06-10 10:12:28.457180] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:23.038 [2024-06-10 10:12:28.457184] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.038 [2024-06-10 10:12:28.457188] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d116780 name raid_bdev1, state configuring 00:08:23.038 [2024-06-10 10:12:28.457194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:23.038 [2024-06-10 10:12:28.457206] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d116780 00:08:23.038 [2024-06-10 10:12:28.457209] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.038 [2024-06-10 10:12:28.457226] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d179e20 00:08:23.038 [2024-06-10 10:12:28.457254] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d116780 00:08:23.038 [2024-06-10 10:12:28.457257] 
bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d116780 00:08:23.038 [2024-06-10 10:12:28.457271] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.038 pt1 00:08:23.038 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:08:23.038 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:23.038 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:23.038 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:23.039 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:23.039 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:23.039 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:23.039 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:23.039 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:23.039 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:23.039 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:23.039 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.039 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.297 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:23.297 "name": "raid_bdev1", 00:08:23.297 "uuid": "ebd9ba70-2711-11ef-b084-113036b5c18d", 00:08:23.297 "strip_size_kb": 0, 00:08:23.297 "state": "online", 00:08:23.297 "raid_level": "raid1", 00:08:23.297 "superblock": true, 00:08:23.297 "num_base_bdevs": 2, 00:08:23.297 "num_base_bdevs_discovered": 1, 00:08:23.297 "num_base_bdevs_operational": 1, 00:08:23.297 "base_bdevs_list": [ 00:08:23.297 { 00:08:23.297 "name": null, 00:08:23.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.297 "is_configured": false, 00:08:23.297 "data_offset": 2048, 00:08:23.297 "data_size": 63488 00:08:23.297 }, 00:08:23.297 { 00:08:23.297 "name": "pt2", 00:08:23.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.297 "is_configured": true, 00:08:23.297 "data_offset": 2048, 00:08:23.297 "data_size": 63488 00:08:23.297 } 00:08:23.297 ] 00:08:23.297 }' 00:08:23.297 10:12:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:23.297 10:12:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.556 10:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:08:23.556 10:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:23.814 10:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:08:23.814 10:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:23.814 10:12:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:08:24.072 [2024-06-10 10:12:29.578053] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' ebd9ba70-2711-11ef-b084-113036b5c18d '!=' ebd9ba70-2711-11ef-b084-113036b5c18d ']' 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 52079 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 52079 ']' 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 52079 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps -c -o command 52079 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # tail -1 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:08:24.072 killing process with pid 52079 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 52079' 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 52079 00:08:24.072 [2024-06-10 10:12:29.610230] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.072 [2024-06-10 10:12:29.610259] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.072 [2024-06-10 10:12:29.610270] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.072 [2024-06-10 10:12:29.610275] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d116780 name raid_bdev1, state offline 00:08:24.072 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 52079 00:08:24.072 [2024-06-10 10:12:29.619811] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.331 10:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:08:24.331 00:08:24.331 real 0m12.177s 00:08:24.331 user 0m21.773s 00:08:24.331 sys 0m1.915s 00:08:24.331 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:24.331 10:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.331 ************************************ 00:08:24.331 END TEST raid_superblock_test 00:08:24.331 ************************************ 00:08:24.331 10:12:29 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:24.331 10:12:29 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:08:24.331 10:12:29 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:24.331 10:12:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.331 ************************************ 00:08:24.331 START TEST raid_read_error_test 00:08:24.331 ************************************ 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 2 read 00:08:24.331 10:12:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.CF7jooQU 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=52468 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 52468 /var/tmp/spdk-raid.sock 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 52468 ']' 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:24.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:24.331 10:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.331 [2024-06-10 10:12:29.841615] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:08:24.331 [2024-06-10 10:12:29.841826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:24.898 EAL: TSC is not safe to use in SMP mode 00:08:24.898 EAL: TSC is not invariant 00:08:24.898 [2024-06-10 10:12:30.278688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.898 [2024-06-10 10:12:30.355542] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:24.898 [2024-06-10 10:12:30.357628] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.898 [2024-06-10 10:12:30.358245] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.898 [2024-06-10 10:12:30.358256] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.465 10:12:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:25.465 10:12:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:08:25.465 10:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:25.465 10:12:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:25.465 BaseBdev1_malloc 00:08:25.465 10:12:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:25.723 true 00:08:25.723 10:12:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:25.983 [2024-06-10 10:12:31.444428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:25.983 [2024-06-10 10:12:31.444488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.983 [2024-06-10 10:12:31.444512] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b94c780 00:08:25.983 [2024-06-10 10:12:31.444520] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.983 [2024-06-10 10:12:31.445067] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.983 [2024-06-10 10:12:31.445098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:25.983 BaseBdev1 00:08:25.983 10:12:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:25.983 10:12:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:26.242 BaseBdev2_malloc 00:08:26.242 10:12:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:26.242 true 00:08:26.242 10:12:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:26.509 [2024-06-10 10:12:32.052472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:26.509 [2024-06-10 10:12:32.052525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.509 [2024-06-10 10:12:32.052567] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b94cc80 00:08:26.509 [2024-06-10 10:12:32.052575] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.509 [2024-06-10 10:12:32.053101] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.509 [2024-06-10 10:12:32.053130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:26.509 BaseBdev2 00:08:26.509 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:26.779 [2024-06-10 10:12:32.260490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.779 [2024-06-10 10:12:32.260921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.779 [2024-06-10 10:12:32.260976] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b94cf00 00:08:26.779 [2024-06-10 10:12:32.260981] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:26.779 [2024-06-10 10:12:32.261008] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b9b8e20 00:08:26.779 [2024-06-10 10:12:32.261062] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b94cf00 00:08:26.779 [2024-06-10 10:12:32.261065] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b94cf00 00:08:26.780 [2024-06-10 10:12:32.261086] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.780 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:08:27.038 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:27.038 "name": "raid_bdev1", 00:08:27.038 "uuid": "f35c271c-2711-11ef-b084-113036b5c18d", 00:08:27.038 "strip_size_kb": 0, 00:08:27.038 "state": "online", 00:08:27.038 "raid_level": "raid1", 00:08:27.038 "superblock": true, 00:08:27.038 "num_base_bdevs": 2, 00:08:27.038 "num_base_bdevs_discovered": 2, 00:08:27.038 "num_base_bdevs_operational": 2, 00:08:27.038 "base_bdevs_list": [ 00:08:27.038 { 00:08:27.038 "name": "BaseBdev1", 00:08:27.038 "uuid": "a12cc235-8385-505b-a26c-7977d64c3489", 00:08:27.038 "is_configured": true, 00:08:27.038 "data_offset": 2048, 00:08:27.038 "data_size": 63488 00:08:27.038 }, 00:08:27.038 { 00:08:27.038 "name": "BaseBdev2", 00:08:27.038 "uuid": "6c29e80d-b8cb-3f52-90fb-72542ddb0387", 00:08:27.038 "is_configured": true, 00:08:27.038 "data_offset": 2048, 00:08:27.038 "data_size": 63488 00:08:27.038 } 00:08:27.038 ] 00:08:27.038 }' 00:08:27.038 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:27.038 10:12:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.298 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:27.298 10:12:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:27.298 [2024-06-10 10:12:32.900579] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b9b8ec0 00:08:28.236 10:12:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:28.495 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.495 10:12:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.754 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:28.754 "name": "raid_bdev1", 00:08:28.754 "uuid": "f35c271c-2711-11ef-b084-113036b5c18d", 00:08:28.754 "strip_size_kb": 0, 00:08:28.754 "state": "online", 00:08:28.754 "raid_level": "raid1", 00:08:28.754 "superblock": true, 00:08:28.754 "num_base_bdevs": 2, 00:08:28.754 "num_base_bdevs_discovered": 2, 00:08:28.754 "num_base_bdevs_operational": 2, 00:08:28.754 "base_bdevs_list": [ 00:08:28.754 { 00:08:28.754 "name": "BaseBdev1", 00:08:28.754 "uuid": "a12cc235-8385-505b-a26c-7977d64c3489", 00:08:28.754 "is_configured": true, 00:08:28.754 "data_offset": 2048, 00:08:28.754 "data_size": 63488 00:08:28.754 }, 00:08:28.754 { 00:08:28.754 "name": "BaseBdev2", 00:08:28.754 "uuid": "6c29e80d-b8cb-3f52-90fb-72542ddb0387", 00:08:28.754 "is_configured": true, 00:08:28.754 "data_offset": 2048, 00:08:28.754 "data_size": 63488 00:08:28.754 } 00:08:28.754 ] 00:08:28.754 }' 00:08:28.754 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:28.754 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:29.322 [2024-06-10 10:12:34.880053] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.322 [2024-06-10 10:12:34.880080] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.322 [2024-06-10 10:12:34.880382] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.322 [2024-06-10 10:12:34.880390] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.322 [2024-06-10 10:12:34.880403] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.322 [2024-06-10 10:12:34.880407] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b94cf00 name raid_bdev1, state offline 00:08:29.322 0 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 52468 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 52468 ']' 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 52468 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 52468 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # tail -1 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:08:29.322 killing process with pid 52468 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 52468' 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 52468 00:08:29.322 [2024-06-10 10:12:34.912457] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.322 10:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 52468 00:08:29.322 [2024-06-10 10:12:34.922044] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.581 10:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.CF7jooQU 00:08:29.581 10:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:29.581 10:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:29.581 10:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:08:29.581 10:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:08:29.581 10:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:29.581 10:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:29.581 10:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:29.581 00:08:29.581 real 0m5.270s 00:08:29.581 user 0m7.882s 00:08:29.581 sys 0m0.988s 00:08:29.581 10:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:29.581 10:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.581 ************************************ 00:08:29.581 END TEST raid_read_error_test 00:08:29.581 ************************************ 00:08:29.581 10:12:35 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:29.581 10:12:35 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:08:29.581 10:12:35 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:29.581 10:12:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.581 ************************************ 00:08:29.581 START TEST raid_write_error_test 00:08:29.581 ************************************ 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 2 write 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:29.581 10:12:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.HM9l122Z 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=52592 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 52592 /var/tmp/spdk-raid.sock 00:08:29.581 10:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 52592 ']' 00:08:29.582 10:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:29.582 10:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:29.582 10:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:29.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:29.582 10:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:29.582 10:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:29.582 10:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.582 [2024-06-10 10:12:35.160860] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:08:29.582 [2024-06-10 10:12:35.161009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:30.151 EAL: TSC is not safe to use in SMP mode 00:08:30.151 EAL: TSC is not invariant 00:08:30.151 [2024-06-10 10:12:35.625906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.151 [2024-06-10 10:12:35.744644] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:08:30.151 [2024-06-10 10:12:35.747927] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.151 [2024-06-10 10:12:35.749090] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.151 [2024-06-10 10:12:35.749116] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.728 10:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:30.728 10:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:08:30.728 10:12:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:30.728 10:12:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:31.007 BaseBdev1_malloc 00:08:31.007 10:12:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:31.266 true 00:08:31.266 10:12:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:31.525 [2024-06-10 10:12:36.951501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:31.525 [2024-06-10 10:12:36.951561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.525 [2024-06-10 10:12:36.951587] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82afc4780 00:08:31.525 [2024-06-10 10:12:36.951595] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.525 [2024-06-10 10:12:36.952125] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.525 [2024-06-10 10:12:36.952149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:31.525 BaseBdev1 00:08:31.525 10:12:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:31.525 10:12:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:31.783 BaseBdev2_malloc 00:08:31.783 10:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:32.041 true 00:08:32.041 10:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:32.300 [2024-06-10 10:12:37.815573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:32.300 [2024-06-10 10:12:37.815633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.300 [2024-06-10 10:12:37.815660] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82afc4c80 00:08:32.300 [2024-06-10 10:12:37.815668] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.300 [2024-06-10 10:12:37.816265] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.300 [2024-06-10 10:12:37.816295] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:32.300 BaseBdev2 00:08:32.300 10:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:32.558 [2024-06-10 10:12:38.123606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.558 [2024-06-10 10:12:38.124076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.558 [2024-06-10 10:12:38.124143] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82afc4f00 00:08:32.558 [2024-06-10 10:12:38.124149] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:32.558 [2024-06-10 10:12:38.124179] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b030e20 00:08:32.558 [2024-06-10 10:12:38.124237] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82afc4f00 00:08:32.558 [2024-06-10 10:12:38.124241] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82afc4f00 00:08:32.558 [2024-06-10 10:12:38.124265] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.558 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.815 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:32.815 "name": "raid_bdev1", 00:08:32.815 "uuid": "f6dacae5-2711-11ef-b084-113036b5c18d", 00:08:32.815 "strip_size_kb": 0, 00:08:32.815 "state": "online", 00:08:32.815 "raid_level": "raid1", 00:08:32.815 "superblock": true, 00:08:32.815 "num_base_bdevs": 2, 00:08:32.815 "num_base_bdevs_discovered": 2, 00:08:32.815 "num_base_bdevs_operational": 2, 00:08:32.815 "base_bdevs_list": [ 00:08:32.815 { 00:08:32.815 "name": "BaseBdev1", 00:08:32.815 "uuid": "05bbc82d-3092-cf5b-a530-d69ba77c900e", 00:08:32.815 "is_configured": true, 00:08:32.815 "data_offset": 2048, 00:08:32.815 "data_size": 63488 00:08:32.815 }, 00:08:32.815 { 00:08:32.815 "name": "BaseBdev2", 00:08:32.815 "uuid": 
"1c400643-42e6-c258-b2f5-b10738588e1a", 00:08:32.815 "is_configured": true, 00:08:32.815 "data_offset": 2048, 00:08:32.815 "data_size": 63488 00:08:32.815 } 00:08:32.815 ] 00:08:32.815 }' 00:08:32.815 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:32.815 10:12:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.381 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:33.381 10:12:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:33.381 [2024-06-10 10:12:38.799715] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b030ec0 00:08:34.315 10:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:34.573 [2024-06-10 10:12:40.096970] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:34.573 [2024-06-10 10:12:40.097088] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.573 [2024-06-10 10:12:40.097234] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x82b030ec0 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:34.573 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.830 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:34.830 "name": "raid_bdev1", 00:08:34.830 "uuid": "f6dacae5-2711-11ef-b084-113036b5c18d", 00:08:34.830 "strip_size_kb": 0, 00:08:34.830 "state": "online", 00:08:34.830 
"raid_level": "raid1", 00:08:34.830 "superblock": true, 00:08:34.830 "num_base_bdevs": 2, 00:08:34.830 "num_base_bdevs_discovered": 1, 00:08:34.830 "num_base_bdevs_operational": 1, 00:08:34.830 "base_bdevs_list": [ 00:08:34.830 { 00:08:34.830 "name": null, 00:08:34.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.830 "is_configured": false, 00:08:34.830 "data_offset": 2048, 00:08:34.830 "data_size": 63488 00:08:34.830 }, 00:08:34.830 { 00:08:34.830 "name": "BaseBdev2", 00:08:34.830 "uuid": "1c400643-42e6-c258-b2f5-b10738588e1a", 00:08:34.830 "is_configured": true, 00:08:34.830 "data_offset": 2048, 00:08:34.830 "data_size": 63488 00:08:34.830 } 00:08:34.830 ] 00:08:34.830 }' 00:08:34.830 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:34.830 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.088 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:35.346 [2024-06-10 10:12:40.874187] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.346 [2024-06-10 10:12:40.874217] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.346 [2024-06-10 10:12:40.874513] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.346 [2024-06-10 10:12:40.874521] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.346 [2024-06-10 10:12:40.874546] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.346 [2024-06-10 10:12:40.874551] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82afc4f00 name raid_bdev1, state offline 00:08:35.346 0 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 52592 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 52592 ']' 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 52592 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 52592 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # tail -1 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:08:35.346 killing process with pid 52592 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 52592' 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 52592 00:08:35.346 [2024-06-10 10:12:40.905558] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.346 10:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 52592 00:08:35.346 [2024-06-10 10:12:40.915138] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.605 10:12:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.HM9l122Z 00:08:35.605 10:12:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:35.605 10:12:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:35.605 10:12:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:08:35.605 10:12:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:08:35.605 10:12:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:35.605 10:12:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:35.605 10:12:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:35.605 00:08:35.605 real 0m5.950s 00:08:35.605 user 0m9.206s 00:08:35.605 sys 0m0.971s 00:08:35.605 10:12:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:35.605 10:12:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.605 ************************************ 00:08:35.605 END TEST raid_write_error_test 00:08:35.605 ************************************ 00:08:35.605 10:12:41 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:08:35.605 10:12:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:08:35.605 10:12:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:35.605 10:12:41 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:08:35.605 10:12:41 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:35.605 10:12:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.605 ************************************ 00:08:35.605 START TEST raid_state_function_test 00:08:35.605 ************************************ 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 3 false 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=52714 00:08:35.605 Process raid pid: 52714 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 52714' 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 52714 /var/tmp/spdk-raid.sock 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 52714 ']' 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:08:35.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:08:35.605 10:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.605 [2024-06-10 10:12:41.155899] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:08:35.605 [2024-06-10 10:12:41.156160] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:36.171 EAL: TSC is not safe to use in SMP mode 00:08:36.171 EAL: TSC is not invariant 00:08:36.171 [2024-06-10 10:12:41.656448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.171 [2024-06-10 10:12:41.762126] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
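The state-function test that begins here drives a raid0 array through its lifecycle entirely over RPC: the raid is created first while its base bdevs are still missing (state "configuring"), the malloc base bdevs are added afterwards, and the array only reports "online" once all three have been claimed. A condensed sketch of that sequence, reusing the rpc.py calls visible in the trace; the single loop is a simplification of the test's repeated create/delete/re-create steps.

# Condensed sketch of the configuring -> online flow exercised below.
SPDK=/usr/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Create the raid first; its base bdevs do not exist yet, so it stays "configuring".
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Add the base bdevs one at a time; the raid claims each as it appears.
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    $RPC bdev_malloc_create 32 512 -b "$b"
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
done
# The state printed after the last iteration should be "online"; the earlier ones stay "configuring".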
00:08:36.171 [2024-06-10 10:12:41.765442] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.171 [2024-06-10 10:12:41.766584] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.171 [2024-06-10 10:12:41.766608] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.735 10:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:08:36.735 10:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:08:36.735 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:37.249 [2024-06-10 10:12:42.477257] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.249 [2024-06-10 10:12:42.477302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.249 [2024-06-10 10:12:42.477307] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.249 [2024-06-10 10:12:42.477315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.249 [2024-06-10 10:12:42.477318] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.249 [2024-06-10 10:12:42.477324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:37.249 "name": "Existed_Raid", 00:08:37.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.249 "strip_size_kb": 64, 00:08:37.249 "state": "configuring", 00:08:37.249 "raid_level": "raid0", 00:08:37.249 "superblock": false, 00:08:37.249 "num_base_bdevs": 3, 00:08:37.249 "num_base_bdevs_discovered": 0, 00:08:37.249 "num_base_bdevs_operational": 3, 00:08:37.249 
"base_bdevs_list": [ 00:08:37.249 { 00:08:37.249 "name": "BaseBdev1", 00:08:37.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.249 "is_configured": false, 00:08:37.249 "data_offset": 0, 00:08:37.249 "data_size": 0 00:08:37.249 }, 00:08:37.249 { 00:08:37.249 "name": "BaseBdev2", 00:08:37.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.249 "is_configured": false, 00:08:37.249 "data_offset": 0, 00:08:37.249 "data_size": 0 00:08:37.249 }, 00:08:37.249 { 00:08:37.249 "name": "BaseBdev3", 00:08:37.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.249 "is_configured": false, 00:08:37.249 "data_offset": 0, 00:08:37.249 "data_size": 0 00:08:37.249 } 00:08:37.249 ] 00:08:37.249 }' 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:37.249 10:12:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.815 10:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:38.073 [2024-06-10 10:12:43.509311] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.073 [2024-06-10 10:12:43.509337] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b2ad500 name Existed_Raid, state configuring 00:08:38.073 10:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:38.331 [2024-06-10 10:12:43.845344] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.331 [2024-06-10 10:12:43.845384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.331 [2024-06-10 10:12:43.845388] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.331 [2024-06-10 10:12:43.845396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.331 [2024-06-10 10:12:43.845423] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.331 [2024-06-10 10:12:43.845431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.331 10:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:38.589 [2024-06-10 10:12:44.150296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.589 BaseBdev1 00:08:38.589 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:38.589 10:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:08:38.589 10:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:08:38.589 10:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:08:38.589 10:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:08:38.589 10:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:08:38.589 10:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:38.846 10:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.104 [ 00:08:39.104 { 00:08:39.104 "name": "BaseBdev1", 00:08:39.104 "aliases": [ 00:08:39.104 "fa724103-2711-11ef-b084-113036b5c18d" 00:08:39.104 ], 00:08:39.104 "product_name": "Malloc disk", 00:08:39.104 "block_size": 512, 00:08:39.104 "num_blocks": 65536, 00:08:39.104 "uuid": "fa724103-2711-11ef-b084-113036b5c18d", 00:08:39.104 "assigned_rate_limits": { 00:08:39.104 "rw_ios_per_sec": 0, 00:08:39.104 "rw_mbytes_per_sec": 0, 00:08:39.104 "r_mbytes_per_sec": 0, 00:08:39.104 "w_mbytes_per_sec": 0 00:08:39.104 }, 00:08:39.104 "claimed": true, 00:08:39.104 "claim_type": "exclusive_write", 00:08:39.104 "zoned": false, 00:08:39.104 "supported_io_types": { 00:08:39.104 "read": true, 00:08:39.104 "write": true, 00:08:39.104 "unmap": true, 00:08:39.104 "write_zeroes": true, 00:08:39.104 "flush": true, 00:08:39.104 "reset": true, 00:08:39.104 "compare": false, 00:08:39.104 "compare_and_write": false, 00:08:39.104 "abort": true, 00:08:39.104 "nvme_admin": false, 00:08:39.104 "nvme_io": false 00:08:39.104 }, 00:08:39.104 "memory_domains": [ 00:08:39.104 { 00:08:39.104 "dma_device_id": "system", 00:08:39.104 "dma_device_type": 1 00:08:39.104 }, 00:08:39.104 { 00:08:39.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.104 "dma_device_type": 2 00:08:39.104 } 00:08:39.104 ], 00:08:39.104 "driver_specific": {} 00:08:39.104 } 00:08:39.104 ] 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:39.362 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.620 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:39.620 "name": "Existed_Raid", 00:08:39.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.620 "strip_size_kb": 64, 00:08:39.620 "state": "configuring", 00:08:39.620 "raid_level": "raid0", 
00:08:39.620 "superblock": false, 00:08:39.620 "num_base_bdevs": 3, 00:08:39.620 "num_base_bdevs_discovered": 1, 00:08:39.620 "num_base_bdevs_operational": 3, 00:08:39.620 "base_bdevs_list": [ 00:08:39.620 { 00:08:39.620 "name": "BaseBdev1", 00:08:39.620 "uuid": "fa724103-2711-11ef-b084-113036b5c18d", 00:08:39.620 "is_configured": true, 00:08:39.620 "data_offset": 0, 00:08:39.620 "data_size": 65536 00:08:39.620 }, 00:08:39.620 { 00:08:39.620 "name": "BaseBdev2", 00:08:39.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.620 "is_configured": false, 00:08:39.620 "data_offset": 0, 00:08:39.620 "data_size": 0 00:08:39.620 }, 00:08:39.620 { 00:08:39.620 "name": "BaseBdev3", 00:08:39.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.620 "is_configured": false, 00:08:39.620 "data_offset": 0, 00:08:39.620 "data_size": 0 00:08:39.620 } 00:08:39.620 ] 00:08:39.620 }' 00:08:39.620 10:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:39.620 10:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.878 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:40.137 [2024-06-10 10:12:45.585488] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.137 [2024-06-10 10:12:45.585521] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b2ad500 name Existed_Raid, state configuring 00:08:40.137 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:40.395 [2024-06-10 10:12:45.801501] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.395 [2024-06-10 10:12:45.802193] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.395 [2024-06-10 10:12:45.802232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.395 [2024-06-10 10:12:45.802237] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:40.395 [2024-06-10 10:12:45.802245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.395 10:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.654 10:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:40.654 "name": "Existed_Raid", 00:08:40.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.654 "strip_size_kb": 64, 00:08:40.654 "state": "configuring", 00:08:40.654 "raid_level": "raid0", 00:08:40.654 "superblock": false, 00:08:40.654 "num_base_bdevs": 3, 00:08:40.654 "num_base_bdevs_discovered": 1, 00:08:40.654 "num_base_bdevs_operational": 3, 00:08:40.654 "base_bdevs_list": [ 00:08:40.654 { 00:08:40.654 "name": "BaseBdev1", 00:08:40.654 "uuid": "fa724103-2711-11ef-b084-113036b5c18d", 00:08:40.654 "is_configured": true, 00:08:40.654 "data_offset": 0, 00:08:40.654 "data_size": 65536 00:08:40.654 }, 00:08:40.654 { 00:08:40.654 "name": "BaseBdev2", 00:08:40.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.654 "is_configured": false, 00:08:40.654 "data_offset": 0, 00:08:40.654 "data_size": 0 00:08:40.654 }, 00:08:40.654 { 00:08:40.654 "name": "BaseBdev3", 00:08:40.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.654 "is_configured": false, 00:08:40.654 "data_offset": 0, 00:08:40.654 "data_size": 0 00:08:40.654 } 00:08:40.654 ] 00:08:40.654 }' 00:08:40.654 10:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:40.654 10:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.912 10:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.171 [2024-06-10 10:12:46.737653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.171 BaseBdev2 00:08:41.171 10:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:41.171 10:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:08:41.171 10:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:08:41.171 10:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:08:41.171 10:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:08:41.171 10:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:08:41.171 10:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:41.429 10:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.689 [ 00:08:41.689 { 00:08:41.689 "name": "BaseBdev2", 00:08:41.689 "aliases": [ 00:08:41.689 "fbfd2d96-2711-11ef-b084-113036b5c18d" 
00:08:41.689 ], 00:08:41.689 "product_name": "Malloc disk", 00:08:41.689 "block_size": 512, 00:08:41.689 "num_blocks": 65536, 00:08:41.689 "uuid": "fbfd2d96-2711-11ef-b084-113036b5c18d", 00:08:41.689 "assigned_rate_limits": { 00:08:41.689 "rw_ios_per_sec": 0, 00:08:41.689 "rw_mbytes_per_sec": 0, 00:08:41.689 "r_mbytes_per_sec": 0, 00:08:41.689 "w_mbytes_per_sec": 0 00:08:41.689 }, 00:08:41.689 "claimed": true, 00:08:41.689 "claim_type": "exclusive_write", 00:08:41.689 "zoned": false, 00:08:41.689 "supported_io_types": { 00:08:41.689 "read": true, 00:08:41.689 "write": true, 00:08:41.689 "unmap": true, 00:08:41.689 "write_zeroes": true, 00:08:41.689 "flush": true, 00:08:41.689 "reset": true, 00:08:41.689 "compare": false, 00:08:41.689 "compare_and_write": false, 00:08:41.689 "abort": true, 00:08:41.689 "nvme_admin": false, 00:08:41.689 "nvme_io": false 00:08:41.689 }, 00:08:41.689 "memory_domains": [ 00:08:41.689 { 00:08:41.689 "dma_device_id": "system", 00:08:41.689 "dma_device_type": 1 00:08:41.689 }, 00:08:41.689 { 00:08:41.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.689 "dma_device_type": 2 00:08:41.689 } 00:08:41.689 ], 00:08:41.689 "driver_specific": {} 00:08:41.689 } 00:08:41.689 ] 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:41.689 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.948 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:41.948 "name": "Existed_Raid", 00:08:41.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.948 "strip_size_kb": 64, 00:08:41.948 "state": "configuring", 00:08:41.948 "raid_level": "raid0", 00:08:41.948 "superblock": false, 00:08:41.948 "num_base_bdevs": 3, 00:08:41.948 "num_base_bdevs_discovered": 2, 00:08:41.948 "num_base_bdevs_operational": 3, 00:08:41.948 "base_bdevs_list": [ 00:08:41.948 { 00:08:41.948 "name": 
"BaseBdev1", 00:08:41.948 "uuid": "fa724103-2711-11ef-b084-113036b5c18d", 00:08:41.948 "is_configured": true, 00:08:41.948 "data_offset": 0, 00:08:41.948 "data_size": 65536 00:08:41.948 }, 00:08:41.948 { 00:08:41.948 "name": "BaseBdev2", 00:08:41.948 "uuid": "fbfd2d96-2711-11ef-b084-113036b5c18d", 00:08:41.948 "is_configured": true, 00:08:41.948 "data_offset": 0, 00:08:41.948 "data_size": 65536 00:08:41.948 }, 00:08:41.948 { 00:08:41.948 "name": "BaseBdev3", 00:08:41.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.948 "is_configured": false, 00:08:41.948 "data_offset": 0, 00:08:41.948 "data_size": 0 00:08:41.948 } 00:08:41.948 ] 00:08:41.948 }' 00:08:41.948 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:41.948 10:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.205 10:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:42.462 [2024-06-10 10:12:48.009844] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.462 [2024-06-10 10:12:48.009882] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b2ada00 00:08:42.462 [2024-06-10 10:12:48.009886] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:42.462 [2024-06-10 10:12:48.009922] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b310ec0 00:08:42.462 [2024-06-10 10:12:48.010018] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b2ada00 00:08:42.462 [2024-06-10 10:12:48.010022] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b2ada00 00:08:42.462 [2024-06-10 10:12:48.010050] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.462 BaseBdev3 00:08:42.462 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:08:42.462 10:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:08:42.462 10:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:08:42.462 10:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:08:42.462 10:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:08:42.462 10:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:08:42.462 10:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:42.720 10:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:42.979 [ 00:08:42.979 { 00:08:42.979 "name": "BaseBdev3", 00:08:42.979 "aliases": [ 00:08:42.979 "fcbf4ca4-2711-11ef-b084-113036b5c18d" 00:08:42.979 ], 00:08:42.979 "product_name": "Malloc disk", 00:08:42.979 "block_size": 512, 00:08:42.979 "num_blocks": 65536, 00:08:42.979 "uuid": "fcbf4ca4-2711-11ef-b084-113036b5c18d", 00:08:42.979 "assigned_rate_limits": { 00:08:42.979 "rw_ios_per_sec": 0, 00:08:42.979 "rw_mbytes_per_sec": 0, 00:08:42.979 "r_mbytes_per_sec": 0, 00:08:42.979 
"w_mbytes_per_sec": 0 00:08:42.979 }, 00:08:42.979 "claimed": true, 00:08:42.979 "claim_type": "exclusive_write", 00:08:42.979 "zoned": false, 00:08:42.979 "supported_io_types": { 00:08:42.979 "read": true, 00:08:42.979 "write": true, 00:08:42.979 "unmap": true, 00:08:42.979 "write_zeroes": true, 00:08:42.979 "flush": true, 00:08:42.979 "reset": true, 00:08:42.979 "compare": false, 00:08:42.979 "compare_and_write": false, 00:08:42.979 "abort": true, 00:08:42.979 "nvme_admin": false, 00:08:42.979 "nvme_io": false 00:08:42.979 }, 00:08:42.979 "memory_domains": [ 00:08:42.979 { 00:08:42.979 "dma_device_id": "system", 00:08:42.979 "dma_device_type": 1 00:08:42.979 }, 00:08:42.979 { 00:08:42.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.979 "dma_device_type": 2 00:08:42.979 } 00:08:42.979 ], 00:08:42.979 "driver_specific": {} 00:08:42.979 } 00:08:42.979 ] 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.979 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.237 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:43.237 "name": "Existed_Raid", 00:08:43.237 "uuid": "fcbf5300-2711-11ef-b084-113036b5c18d", 00:08:43.237 "strip_size_kb": 64, 00:08:43.237 "state": "online", 00:08:43.237 "raid_level": "raid0", 00:08:43.237 "superblock": false, 00:08:43.237 "num_base_bdevs": 3, 00:08:43.237 "num_base_bdevs_discovered": 3, 00:08:43.237 "num_base_bdevs_operational": 3, 00:08:43.237 "base_bdevs_list": [ 00:08:43.237 { 00:08:43.237 "name": "BaseBdev1", 00:08:43.238 "uuid": "fa724103-2711-11ef-b084-113036b5c18d", 00:08:43.238 "is_configured": true, 00:08:43.238 "data_offset": 0, 00:08:43.238 "data_size": 65536 00:08:43.238 }, 00:08:43.238 { 00:08:43.238 "name": "BaseBdev2", 00:08:43.238 "uuid": "fbfd2d96-2711-11ef-b084-113036b5c18d", 00:08:43.238 "is_configured": true, 00:08:43.238 "data_offset": 0, 
00:08:43.238 "data_size": 65536 00:08:43.238 }, 00:08:43.238 { 00:08:43.238 "name": "BaseBdev3", 00:08:43.238 "uuid": "fcbf4ca4-2711-11ef-b084-113036b5c18d", 00:08:43.238 "is_configured": true, 00:08:43.238 "data_offset": 0, 00:08:43.238 "data_size": 65536 00:08:43.238 } 00:08:43.238 ] 00:08:43.238 }' 00:08:43.238 10:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:43.238 10:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.496 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:43.754 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:43.754 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:43.754 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:43.754 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:43.754 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:43.754 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:43.754 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:44.076 [2024-06-10 10:12:49.361845] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.076 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:44.076 "name": "Existed_Raid", 00:08:44.076 "aliases": [ 00:08:44.076 "fcbf5300-2711-11ef-b084-113036b5c18d" 00:08:44.076 ], 00:08:44.076 "product_name": "Raid Volume", 00:08:44.076 "block_size": 512, 00:08:44.076 "num_blocks": 196608, 00:08:44.076 "uuid": "fcbf5300-2711-11ef-b084-113036b5c18d", 00:08:44.076 "assigned_rate_limits": { 00:08:44.076 "rw_ios_per_sec": 0, 00:08:44.076 "rw_mbytes_per_sec": 0, 00:08:44.076 "r_mbytes_per_sec": 0, 00:08:44.076 "w_mbytes_per_sec": 0 00:08:44.076 }, 00:08:44.076 "claimed": false, 00:08:44.076 "zoned": false, 00:08:44.076 "supported_io_types": { 00:08:44.076 "read": true, 00:08:44.076 "write": true, 00:08:44.076 "unmap": true, 00:08:44.076 "write_zeroes": true, 00:08:44.076 "flush": true, 00:08:44.076 "reset": true, 00:08:44.076 "compare": false, 00:08:44.076 "compare_and_write": false, 00:08:44.076 "abort": false, 00:08:44.076 "nvme_admin": false, 00:08:44.076 "nvme_io": false 00:08:44.076 }, 00:08:44.076 "memory_domains": [ 00:08:44.076 { 00:08:44.076 "dma_device_id": "system", 00:08:44.076 "dma_device_type": 1 00:08:44.076 }, 00:08:44.076 { 00:08:44.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.076 "dma_device_type": 2 00:08:44.076 }, 00:08:44.076 { 00:08:44.076 "dma_device_id": "system", 00:08:44.076 "dma_device_type": 1 00:08:44.076 }, 00:08:44.076 { 00:08:44.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.076 "dma_device_type": 2 00:08:44.076 }, 00:08:44.076 { 00:08:44.076 "dma_device_id": "system", 00:08:44.076 "dma_device_type": 1 00:08:44.076 }, 00:08:44.076 { 00:08:44.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.076 "dma_device_type": 2 00:08:44.076 } 00:08:44.076 ], 00:08:44.076 "driver_specific": { 00:08:44.076 "raid": { 00:08:44.076 "uuid": "fcbf5300-2711-11ef-b084-113036b5c18d", 00:08:44.076 "strip_size_kb": 64, 00:08:44.076 "state": "online", 
00:08:44.076 "raid_level": "raid0", 00:08:44.076 "superblock": false, 00:08:44.076 "num_base_bdevs": 3, 00:08:44.076 "num_base_bdevs_discovered": 3, 00:08:44.076 "num_base_bdevs_operational": 3, 00:08:44.076 "base_bdevs_list": [ 00:08:44.076 { 00:08:44.077 "name": "BaseBdev1", 00:08:44.077 "uuid": "fa724103-2711-11ef-b084-113036b5c18d", 00:08:44.077 "is_configured": true, 00:08:44.077 "data_offset": 0, 00:08:44.077 "data_size": 65536 00:08:44.077 }, 00:08:44.077 { 00:08:44.077 "name": "BaseBdev2", 00:08:44.077 "uuid": "fbfd2d96-2711-11ef-b084-113036b5c18d", 00:08:44.077 "is_configured": true, 00:08:44.077 "data_offset": 0, 00:08:44.077 "data_size": 65536 00:08:44.077 }, 00:08:44.077 { 00:08:44.077 "name": "BaseBdev3", 00:08:44.077 "uuid": "fcbf4ca4-2711-11ef-b084-113036b5c18d", 00:08:44.077 "is_configured": true, 00:08:44.077 "data_offset": 0, 00:08:44.077 "data_size": 65536 00:08:44.077 } 00:08:44.077 ] 00:08:44.077 } 00:08:44.077 } 00:08:44.077 }' 00:08:44.077 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.077 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:44.077 BaseBdev2 00:08:44.077 BaseBdev3' 00:08:44.077 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:44.077 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:44.077 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:44.336 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:44.336 "name": "BaseBdev1", 00:08:44.336 "aliases": [ 00:08:44.336 "fa724103-2711-11ef-b084-113036b5c18d" 00:08:44.336 ], 00:08:44.336 "product_name": "Malloc disk", 00:08:44.336 "block_size": 512, 00:08:44.336 "num_blocks": 65536, 00:08:44.336 "uuid": "fa724103-2711-11ef-b084-113036b5c18d", 00:08:44.336 "assigned_rate_limits": { 00:08:44.336 "rw_ios_per_sec": 0, 00:08:44.336 "rw_mbytes_per_sec": 0, 00:08:44.336 "r_mbytes_per_sec": 0, 00:08:44.336 "w_mbytes_per_sec": 0 00:08:44.336 }, 00:08:44.336 "claimed": true, 00:08:44.336 "claim_type": "exclusive_write", 00:08:44.336 "zoned": false, 00:08:44.336 "supported_io_types": { 00:08:44.336 "read": true, 00:08:44.336 "write": true, 00:08:44.336 "unmap": true, 00:08:44.336 "write_zeroes": true, 00:08:44.336 "flush": true, 00:08:44.336 "reset": true, 00:08:44.337 "compare": false, 00:08:44.337 "compare_and_write": false, 00:08:44.337 "abort": true, 00:08:44.337 "nvme_admin": false, 00:08:44.337 "nvme_io": false 00:08:44.337 }, 00:08:44.337 "memory_domains": [ 00:08:44.337 { 00:08:44.337 "dma_device_id": "system", 00:08:44.337 "dma_device_type": 1 00:08:44.337 }, 00:08:44.337 { 00:08:44.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.337 "dma_device_type": 2 00:08:44.337 } 00:08:44.337 ], 00:08:44.337 "driver_specific": {} 00:08:44.337 }' 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:44.337 10:12:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:44.337 10:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:44.595 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:44.595 "name": "BaseBdev2", 00:08:44.595 "aliases": [ 00:08:44.595 "fbfd2d96-2711-11ef-b084-113036b5c18d" 00:08:44.595 ], 00:08:44.596 "product_name": "Malloc disk", 00:08:44.596 "block_size": 512, 00:08:44.596 "num_blocks": 65536, 00:08:44.596 "uuid": "fbfd2d96-2711-11ef-b084-113036b5c18d", 00:08:44.596 "assigned_rate_limits": { 00:08:44.596 "rw_ios_per_sec": 0, 00:08:44.596 "rw_mbytes_per_sec": 0, 00:08:44.596 "r_mbytes_per_sec": 0, 00:08:44.596 "w_mbytes_per_sec": 0 00:08:44.596 }, 00:08:44.596 "claimed": true, 00:08:44.596 "claim_type": "exclusive_write", 00:08:44.596 "zoned": false, 00:08:44.596 "supported_io_types": { 00:08:44.596 "read": true, 00:08:44.596 "write": true, 00:08:44.596 "unmap": true, 00:08:44.596 "write_zeroes": true, 00:08:44.596 "flush": true, 00:08:44.596 "reset": true, 00:08:44.596 "compare": false, 00:08:44.596 "compare_and_write": false, 00:08:44.596 "abort": true, 00:08:44.596 "nvme_admin": false, 00:08:44.596 "nvme_io": false 00:08:44.596 }, 00:08:44.596 "memory_domains": [ 00:08:44.596 { 00:08:44.596 "dma_device_id": "system", 00:08:44.596 "dma_device_type": 1 00:08:44.596 }, 00:08:44.596 { 00:08:44.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.596 "dma_device_type": 2 00:08:44.596 } 00:08:44.596 ], 00:08:44.596 "driver_specific": {} 00:08:44.596 }' 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null 
]] 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:08:44.596 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:44.854 "name": "BaseBdev3", 00:08:44.854 "aliases": [ 00:08:44.854 "fcbf4ca4-2711-11ef-b084-113036b5c18d" 00:08:44.854 ], 00:08:44.854 "product_name": "Malloc disk", 00:08:44.854 "block_size": 512, 00:08:44.854 "num_blocks": 65536, 00:08:44.854 "uuid": "fcbf4ca4-2711-11ef-b084-113036b5c18d", 00:08:44.854 "assigned_rate_limits": { 00:08:44.854 "rw_ios_per_sec": 0, 00:08:44.854 "rw_mbytes_per_sec": 0, 00:08:44.854 "r_mbytes_per_sec": 0, 00:08:44.854 "w_mbytes_per_sec": 0 00:08:44.854 }, 00:08:44.854 "claimed": true, 00:08:44.854 "claim_type": "exclusive_write", 00:08:44.854 "zoned": false, 00:08:44.854 "supported_io_types": { 00:08:44.854 "read": true, 00:08:44.854 "write": true, 00:08:44.854 "unmap": true, 00:08:44.854 "write_zeroes": true, 00:08:44.854 "flush": true, 00:08:44.854 "reset": true, 00:08:44.854 "compare": false, 00:08:44.854 "compare_and_write": false, 00:08:44.854 "abort": true, 00:08:44.854 "nvme_admin": false, 00:08:44.854 "nvme_io": false 00:08:44.854 }, 00:08:44.854 "memory_domains": [ 00:08:44.854 { 00:08:44.854 "dma_device_id": "system", 00:08:44.854 "dma_device_type": 1 00:08:44.854 }, 00:08:44.854 { 00:08:44.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.854 "dma_device_type": 2 00:08:44.854 } 00:08:44.854 ], 00:08:44.854 "driver_specific": {} 00:08:44.854 }' 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:44.854 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev1 00:08:45.112 [2024-06-10 10:12:50.677810] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.112 [2024-06-10 10:12:50.677832] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.112 [2024-06-10 10:12:50.677843] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:45.112 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.677 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:45.677 "name": "Existed_Raid", 00:08:45.677 "uuid": "fcbf5300-2711-11ef-b084-113036b5c18d", 00:08:45.677 "strip_size_kb": 64, 00:08:45.677 "state": "offline", 00:08:45.677 "raid_level": "raid0", 00:08:45.677 "superblock": false, 00:08:45.677 "num_base_bdevs": 3, 00:08:45.677 "num_base_bdevs_discovered": 2, 00:08:45.677 "num_base_bdevs_operational": 2, 00:08:45.677 "base_bdevs_list": [ 00:08:45.677 { 00:08:45.677 "name": null, 00:08:45.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.677 "is_configured": false, 00:08:45.677 "data_offset": 0, 00:08:45.677 "data_size": 65536 00:08:45.677 }, 00:08:45.677 { 00:08:45.677 "name": "BaseBdev2", 00:08:45.677 "uuid": "fbfd2d96-2711-11ef-b084-113036b5c18d", 00:08:45.677 "is_configured": true, 00:08:45.677 "data_offset": 0, 00:08:45.677 "data_size": 65536 00:08:45.677 }, 00:08:45.677 { 00:08:45.677 "name": "BaseBdev3", 00:08:45.677 "uuid": "fcbf4ca4-2711-11ef-b084-113036b5c18d", 00:08:45.677 "is_configured": true, 00:08:45.677 "data_offset": 0, 00:08:45.677 "data_size": 65536 00:08:45.677 } 00:08:45.677 ] 00:08:45.677 }' 
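The trace above drives SPDK's raid bdev module entirely through scripts/rpc.py against the /var/tmp/spdk-raid.sock target of a bdev_svc process. Below is a minimal sketch of the state transition being verified here, limited to RPC calls that appear verbatim in this log (bdev_malloc_create, bdev_raid_create, bdev_malloc_delete, bdev_raid_get_bdevs); it assumes a target is already listening on that socket and uses the rpc.py path shown in the surrounding entries.

    # Sketch only: reproduces the raid0 online-to-offline transition checked above.
    # The rpc.py path and socket name are assumptions copied from this log.
    rpc() { /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # three 32 MiB malloc base bdevs with 512-byte blocks
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        rpc bdev_malloc_create 32 512 -b "$b"
    done

    # assemble them into a raid0 volume with a 64 KiB strip size (no superblock, as in this test)
    rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # raid0 has no redundancy, so deleting one base bdev should take the array offline
    rpc bdev_malloc_delete BaseBdev1
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect: offline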
00:08:45.677 10:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:45.677 10:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.935 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:45.935 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:45.935 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:45.935 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:46.194 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:46.194 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:46.194 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:46.452 [2024-06-10 10:12:51.802621] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:46.452 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:46.452 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:46.452 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:46.452 10:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.721 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:46.721 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:46.721 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:46.981 [2024-06-10 10:12:52.351370] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:46.981 [2024-06-10 10:12:52.351393] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b2ada00 name Existed_Raid, state offline 00:08:46.981 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:46.981 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:46.981 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:46.981 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.981 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:46.981 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:46.981 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:08:46.981 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:08:46.981 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:46.981 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.240 BaseBdev2 00:08:47.240 10:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:08:47.240 10:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:08:47.240 10:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:08:47.240 10:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:08:47.240 10:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:08:47.240 10:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:08:47.240 10:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:47.498 10:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.757 [ 00:08:47.757 { 00:08:47.757 "name": "BaseBdev2", 00:08:47.757 "aliases": [ 00:08:47.757 "ff911a01-2711-11ef-b084-113036b5c18d" 00:08:47.757 ], 00:08:47.757 "product_name": "Malloc disk", 00:08:47.757 "block_size": 512, 00:08:47.757 "num_blocks": 65536, 00:08:47.757 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:47.757 "assigned_rate_limits": { 00:08:47.757 "rw_ios_per_sec": 0, 00:08:47.757 "rw_mbytes_per_sec": 0, 00:08:47.757 "r_mbytes_per_sec": 0, 00:08:47.757 "w_mbytes_per_sec": 0 00:08:47.757 }, 00:08:47.757 "claimed": false, 00:08:47.757 "zoned": false, 00:08:47.757 "supported_io_types": { 00:08:47.757 "read": true, 00:08:47.757 "write": true, 00:08:47.757 "unmap": true, 00:08:47.757 "write_zeroes": true, 00:08:47.757 "flush": true, 00:08:47.757 "reset": true, 00:08:47.757 "compare": false, 00:08:47.757 "compare_and_write": false, 00:08:47.757 "abort": true, 00:08:47.757 "nvme_admin": false, 00:08:47.757 "nvme_io": false 00:08:47.757 }, 00:08:47.757 "memory_domains": [ 00:08:47.757 { 00:08:47.757 "dma_device_id": "system", 00:08:47.757 "dma_device_type": 1 00:08:47.757 }, 00:08:47.757 { 00:08:47.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.757 "dma_device_type": 2 00:08:47.757 } 00:08:47.757 ], 00:08:47.757 "driver_specific": {} 00:08:47.757 } 00:08:47.757 ] 00:08:47.757 10:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:08:47.757 10:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:08:47.757 10:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:47.757 10:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.015 BaseBdev3 00:08:48.015 10:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:08:48.015 10:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:08:48.015 10:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:08:48.015 10:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:08:48.015 10:12:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:08:48.015 10:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:08:48.015 10:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:48.274 10:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.533 [ 00:08:48.533 { 00:08:48.533 "name": "BaseBdev3", 00:08:48.533 "aliases": [ 00:08:48.533 "ffff93d6-2711-11ef-b084-113036b5c18d" 00:08:48.533 ], 00:08:48.533 "product_name": "Malloc disk", 00:08:48.533 "block_size": 512, 00:08:48.533 "num_blocks": 65536, 00:08:48.533 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:08:48.533 "assigned_rate_limits": { 00:08:48.533 "rw_ios_per_sec": 0, 00:08:48.533 "rw_mbytes_per_sec": 0, 00:08:48.533 "r_mbytes_per_sec": 0, 00:08:48.533 "w_mbytes_per_sec": 0 00:08:48.533 }, 00:08:48.533 "claimed": false, 00:08:48.533 "zoned": false, 00:08:48.533 "supported_io_types": { 00:08:48.533 "read": true, 00:08:48.533 "write": true, 00:08:48.533 "unmap": true, 00:08:48.533 "write_zeroes": true, 00:08:48.533 "flush": true, 00:08:48.533 "reset": true, 00:08:48.533 "compare": false, 00:08:48.533 "compare_and_write": false, 00:08:48.533 "abort": true, 00:08:48.533 "nvme_admin": false, 00:08:48.533 "nvme_io": false 00:08:48.533 }, 00:08:48.533 "memory_domains": [ 00:08:48.533 { 00:08:48.533 "dma_device_id": "system", 00:08:48.533 "dma_device_type": 1 00:08:48.533 }, 00:08:48.533 { 00:08:48.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.533 "dma_device_type": 2 00:08:48.533 } 00:08:48.533 ], 00:08:48.533 "driver_specific": {} 00:08:48.533 } 00:08:48.533 ] 00:08:48.533 10:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:08:48.533 10:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:08:48.533 10:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:08:48.533 10:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:48.791 [2024-06-10 10:12:54.216231] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.791 [2024-06-10 10:12:54.216282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.791 [2024-06-10 10:12:54.216291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.791 [2024-06-10 10:12:54.216751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:48.791 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.049 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:49.049 "name": "Existed_Raid", 00:08:49.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.049 "strip_size_kb": 64, 00:08:49.049 "state": "configuring", 00:08:49.049 "raid_level": "raid0", 00:08:49.049 "superblock": false, 00:08:49.049 "num_base_bdevs": 3, 00:08:49.049 "num_base_bdevs_discovered": 2, 00:08:49.049 "num_base_bdevs_operational": 3, 00:08:49.049 "base_bdevs_list": [ 00:08:49.049 { 00:08:49.049 "name": "BaseBdev1", 00:08:49.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.049 "is_configured": false, 00:08:49.049 "data_offset": 0, 00:08:49.049 "data_size": 0 00:08:49.049 }, 00:08:49.049 { 00:08:49.049 "name": "BaseBdev2", 00:08:49.049 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:49.049 "is_configured": true, 00:08:49.049 "data_offset": 0, 00:08:49.049 "data_size": 65536 00:08:49.049 }, 00:08:49.049 { 00:08:49.049 "name": "BaseBdev3", 00:08:49.049 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:08:49.049 "is_configured": true, 00:08:49.049 "data_offset": 0, 00:08:49.049 "data_size": 65536 00:08:49.049 } 00:08:49.049 ] 00:08:49.049 }' 00:08:49.049 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:49.049 10:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.307 10:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:08:49.564 [2024-06-10 10:12:55.080215] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:49.564 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.822 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:49.822 "name": "Existed_Raid", 00:08:49.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.822 "strip_size_kb": 64, 00:08:49.822 "state": "configuring", 00:08:49.822 "raid_level": "raid0", 00:08:49.822 "superblock": false, 00:08:49.822 "num_base_bdevs": 3, 00:08:49.822 "num_base_bdevs_discovered": 1, 00:08:49.822 "num_base_bdevs_operational": 3, 00:08:49.822 "base_bdevs_list": [ 00:08:49.822 { 00:08:49.822 "name": "BaseBdev1", 00:08:49.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.822 "is_configured": false, 00:08:49.822 "data_offset": 0, 00:08:49.822 "data_size": 0 00:08:49.822 }, 00:08:49.822 { 00:08:49.822 "name": null, 00:08:49.822 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:49.822 "is_configured": false, 00:08:49.822 "data_offset": 0, 00:08:49.822 "data_size": 65536 00:08:49.822 }, 00:08:49.822 { 00:08:49.822 "name": "BaseBdev3", 00:08:49.822 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:08:49.822 "is_configured": true, 00:08:49.822 "data_offset": 0, 00:08:49.822 "data_size": 65536 00:08:49.822 } 00:08:49.822 ] 00:08:49.822 }' 00:08:49.822 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:49.822 10:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.388 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:50.388 10:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:50.646 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:08:50.646 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:50.904 [2024-06-10 10:12:56.272385] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.904 BaseBdev1 00:08:50.904 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:08:50.904 10:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:08:50.904 10:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:08:50.904 10:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:08:50.904 10:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:08:50.904 10:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:08:50.904 10:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:51.162 10:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:51.421 [ 00:08:51.421 { 00:08:51.421 "name": "BaseBdev1", 00:08:51.421 "aliases": [ 00:08:51.421 "01ac1066-2712-11ef-b084-113036b5c18d" 00:08:51.421 ], 00:08:51.421 "product_name": "Malloc disk", 00:08:51.421 "block_size": 512, 00:08:51.421 "num_blocks": 65536, 00:08:51.421 "uuid": "01ac1066-2712-11ef-b084-113036b5c18d", 00:08:51.421 "assigned_rate_limits": { 00:08:51.421 "rw_ios_per_sec": 0, 00:08:51.421 "rw_mbytes_per_sec": 0, 00:08:51.421 "r_mbytes_per_sec": 0, 00:08:51.421 "w_mbytes_per_sec": 0 00:08:51.421 }, 00:08:51.421 "claimed": true, 00:08:51.421 "claim_type": "exclusive_write", 00:08:51.421 "zoned": false, 00:08:51.421 "supported_io_types": { 00:08:51.421 "read": true, 00:08:51.421 "write": true, 00:08:51.421 "unmap": true, 00:08:51.421 "write_zeroes": true, 00:08:51.421 "flush": true, 00:08:51.421 "reset": true, 00:08:51.422 "compare": false, 00:08:51.422 "compare_and_write": false, 00:08:51.422 "abort": true, 00:08:51.422 "nvme_admin": false, 00:08:51.422 "nvme_io": false 00:08:51.422 }, 00:08:51.422 "memory_domains": [ 00:08:51.422 { 00:08:51.422 "dma_device_id": "system", 00:08:51.422 "dma_device_type": 1 00:08:51.422 }, 00:08:51.422 { 00:08:51.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.422 "dma_device_type": 2 00:08:51.422 } 00:08:51.422 ], 00:08:51.422 "driver_specific": {} 00:08:51.422 } 00:08:51.422 ] 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:51.422 10:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.680 10:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:51.680 "name": "Existed_Raid", 00:08:51.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.680 "strip_size_kb": 64, 00:08:51.680 "state": "configuring", 00:08:51.680 "raid_level": "raid0", 00:08:51.680 "superblock": false, 00:08:51.680 
"num_base_bdevs": 3, 00:08:51.680 "num_base_bdevs_discovered": 2, 00:08:51.680 "num_base_bdevs_operational": 3, 00:08:51.680 "base_bdevs_list": [ 00:08:51.680 { 00:08:51.680 "name": "BaseBdev1", 00:08:51.680 "uuid": "01ac1066-2712-11ef-b084-113036b5c18d", 00:08:51.680 "is_configured": true, 00:08:51.680 "data_offset": 0, 00:08:51.680 "data_size": 65536 00:08:51.680 }, 00:08:51.680 { 00:08:51.680 "name": null, 00:08:51.680 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:51.680 "is_configured": false, 00:08:51.680 "data_offset": 0, 00:08:51.680 "data_size": 65536 00:08:51.680 }, 00:08:51.680 { 00:08:51.680 "name": "BaseBdev3", 00:08:51.680 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:08:51.680 "is_configured": true, 00:08:51.680 "data_offset": 0, 00:08:51.680 "data_size": 65536 00:08:51.680 } 00:08:51.680 ] 00:08:51.680 }' 00:08:51.680 10:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:51.680 10:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.938 10:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:51.939 10:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:52.197 10:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:08:52.197 10:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:08:52.764 [2024-06-10 10:12:58.064362] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:52.764 "name": "Existed_Raid", 00:08:52.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.764 "strip_size_kb": 64, 00:08:52.764 "state": "configuring", 00:08:52.764 
"raid_level": "raid0", 00:08:52.764 "superblock": false, 00:08:52.764 "num_base_bdevs": 3, 00:08:52.764 "num_base_bdevs_discovered": 1, 00:08:52.764 "num_base_bdevs_operational": 3, 00:08:52.764 "base_bdevs_list": [ 00:08:52.764 { 00:08:52.764 "name": "BaseBdev1", 00:08:52.764 "uuid": "01ac1066-2712-11ef-b084-113036b5c18d", 00:08:52.764 "is_configured": true, 00:08:52.764 "data_offset": 0, 00:08:52.764 "data_size": 65536 00:08:52.764 }, 00:08:52.764 { 00:08:52.764 "name": null, 00:08:52.764 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:52.764 "is_configured": false, 00:08:52.764 "data_offset": 0, 00:08:52.764 "data_size": 65536 00:08:52.764 }, 00:08:52.764 { 00:08:52.764 "name": null, 00:08:52.764 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:08:52.764 "is_configured": false, 00:08:52.764 "data_offset": 0, 00:08:52.764 "data_size": 65536 00:08:52.764 } 00:08:52.764 ] 00:08:52.764 }' 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:52.764 10:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.023 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.023 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:53.282 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:08:53.282 10:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:53.540 [2024-06-10 10:12:59.040381] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.540 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.799 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:53.799 "name": "Existed_Raid", 00:08:53.799 "uuid": "00000000-0000-0000-0000-000000000000", 
00:08:53.799 "strip_size_kb": 64, 00:08:53.799 "state": "configuring", 00:08:53.799 "raid_level": "raid0", 00:08:53.799 "superblock": false, 00:08:53.800 "num_base_bdevs": 3, 00:08:53.800 "num_base_bdevs_discovered": 2, 00:08:53.800 "num_base_bdevs_operational": 3, 00:08:53.800 "base_bdevs_list": [ 00:08:53.800 { 00:08:53.800 "name": "BaseBdev1", 00:08:53.800 "uuid": "01ac1066-2712-11ef-b084-113036b5c18d", 00:08:53.800 "is_configured": true, 00:08:53.800 "data_offset": 0, 00:08:53.800 "data_size": 65536 00:08:53.800 }, 00:08:53.800 { 00:08:53.800 "name": null, 00:08:53.800 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:53.800 "is_configured": false, 00:08:53.800 "data_offset": 0, 00:08:53.800 "data_size": 65536 00:08:53.800 }, 00:08:53.800 { 00:08:53.800 "name": "BaseBdev3", 00:08:53.800 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:08:53.800 "is_configured": true, 00:08:53.800 "data_offset": 0, 00:08:53.800 "data_size": 65536 00:08:53.800 } 00:08:53.800 ] 00:08:53.800 }' 00:08:53.800 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:53.800 10:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.366 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.366 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:54.366 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:08:54.366 10:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:54.933 [2024-06-10 10:13:00.272437] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.933 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.191 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:55.191 "name": "Existed_Raid", 
00:08:55.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.191 "strip_size_kb": 64, 00:08:55.191 "state": "configuring", 00:08:55.191 "raid_level": "raid0", 00:08:55.191 "superblock": false, 00:08:55.191 "num_base_bdevs": 3, 00:08:55.191 "num_base_bdevs_discovered": 1, 00:08:55.191 "num_base_bdevs_operational": 3, 00:08:55.191 "base_bdevs_list": [ 00:08:55.191 { 00:08:55.191 "name": null, 00:08:55.191 "uuid": "01ac1066-2712-11ef-b084-113036b5c18d", 00:08:55.191 "is_configured": false, 00:08:55.191 "data_offset": 0, 00:08:55.191 "data_size": 65536 00:08:55.191 }, 00:08:55.191 { 00:08:55.191 "name": null, 00:08:55.191 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:55.191 "is_configured": false, 00:08:55.191 "data_offset": 0, 00:08:55.191 "data_size": 65536 00:08:55.191 }, 00:08:55.191 { 00:08:55.191 "name": "BaseBdev3", 00:08:55.191 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:08:55.191 "is_configured": true, 00:08:55.191 "data_offset": 0, 00:08:55.191 "data_size": 65536 00:08:55.191 } 00:08:55.191 ] 00:08:55.191 }' 00:08:55.191 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:55.191 10:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.449 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:55.449 10:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.708 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:08:55.708 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:55.967 [2024-06-10 10:13:01.409310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.967 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:56.225 10:13:01 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:56.225 "name": "Existed_Raid", 00:08:56.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.225 "strip_size_kb": 64, 00:08:56.225 "state": "configuring", 00:08:56.225 "raid_level": "raid0", 00:08:56.225 "superblock": false, 00:08:56.226 "num_base_bdevs": 3, 00:08:56.226 "num_base_bdevs_discovered": 2, 00:08:56.226 "num_base_bdevs_operational": 3, 00:08:56.226 "base_bdevs_list": [ 00:08:56.226 { 00:08:56.226 "name": null, 00:08:56.226 "uuid": "01ac1066-2712-11ef-b084-113036b5c18d", 00:08:56.226 "is_configured": false, 00:08:56.226 "data_offset": 0, 00:08:56.226 "data_size": 65536 00:08:56.226 }, 00:08:56.226 { 00:08:56.226 "name": "BaseBdev2", 00:08:56.226 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:56.226 "is_configured": true, 00:08:56.226 "data_offset": 0, 00:08:56.226 "data_size": 65536 00:08:56.226 }, 00:08:56.226 { 00:08:56.226 "name": "BaseBdev3", 00:08:56.226 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:08:56.226 "is_configured": true, 00:08:56.226 "data_offset": 0, 00:08:56.226 "data_size": 65536 00:08:56.226 } 00:08:56.226 ] 00:08:56.226 }' 00:08:56.226 10:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:56.226 10:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.484 10:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:56.484 10:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.049 10:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:08:57.049 10:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.049 10:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:57.307 10:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 01ac1066-2712-11ef-b084-113036b5c18d 00:08:57.307 [2024-06-10 10:13:02.869459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:57.307 [2024-06-10 10:13:02.869491] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b2ada00 00:08:57.307 [2024-06-10 10:13:02.869506] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:57.307 [2024-06-10 10:13:02.869525] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b310e20 00:08:57.307 [2024-06-10 10:13:02.869579] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b2ada00 00:08:57.307 [2024-06-10 10:13:02.869583] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b2ada00 00:08:57.307 [2024-06-10 10:13:02.869611] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.307 NewBaseBdev 00:08:57.307 10:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:08:57.307 10:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:08:57.307 10:13:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_timeout= 00:08:57.307 10:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:08:57.307 10:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:08:57.307 10:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:08:57.307 10:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:57.876 [ 00:08:57.876 { 00:08:57.876 "name": "NewBaseBdev", 00:08:57.876 "aliases": [ 00:08:57.876 "01ac1066-2712-11ef-b084-113036b5c18d" 00:08:57.876 ], 00:08:57.876 "product_name": "Malloc disk", 00:08:57.876 "block_size": 512, 00:08:57.876 "num_blocks": 65536, 00:08:57.876 "uuid": "01ac1066-2712-11ef-b084-113036b5c18d", 00:08:57.876 "assigned_rate_limits": { 00:08:57.876 "rw_ios_per_sec": 0, 00:08:57.876 "rw_mbytes_per_sec": 0, 00:08:57.876 "r_mbytes_per_sec": 0, 00:08:57.876 "w_mbytes_per_sec": 0 00:08:57.876 }, 00:08:57.876 "claimed": true, 00:08:57.876 "claim_type": "exclusive_write", 00:08:57.876 "zoned": false, 00:08:57.876 "supported_io_types": { 00:08:57.876 "read": true, 00:08:57.876 "write": true, 00:08:57.876 "unmap": true, 00:08:57.876 "write_zeroes": true, 00:08:57.876 "flush": true, 00:08:57.876 "reset": true, 00:08:57.876 "compare": false, 00:08:57.876 "compare_and_write": false, 00:08:57.876 "abort": true, 00:08:57.876 "nvme_admin": false, 00:08:57.876 "nvme_io": false 00:08:57.876 }, 00:08:57.876 "memory_domains": [ 00:08:57.876 { 00:08:57.876 "dma_device_id": "system", 00:08:57.876 "dma_device_type": 1 00:08:57.876 }, 00:08:57.876 { 00:08:57.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.876 "dma_device_type": 2 00:08:57.876 } 00:08:57.876 ], 00:08:57.876 "driver_specific": {} 00:08:57.876 } 00:08:57.876 ] 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:57.876 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.135 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:58.135 "name": "Existed_Raid", 00:08:58.135 "uuid": "059ab84c-2712-11ef-b084-113036b5c18d", 00:08:58.135 "strip_size_kb": 64, 00:08:58.135 "state": "online", 00:08:58.135 "raid_level": "raid0", 00:08:58.135 "superblock": false, 00:08:58.135 "num_base_bdevs": 3, 00:08:58.135 "num_base_bdevs_discovered": 3, 00:08:58.135 "num_base_bdevs_operational": 3, 00:08:58.135 "base_bdevs_list": [ 00:08:58.135 { 00:08:58.135 "name": "NewBaseBdev", 00:08:58.135 "uuid": "01ac1066-2712-11ef-b084-113036b5c18d", 00:08:58.135 "is_configured": true, 00:08:58.135 "data_offset": 0, 00:08:58.135 "data_size": 65536 00:08:58.135 }, 00:08:58.135 { 00:08:58.135 "name": "BaseBdev2", 00:08:58.135 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:58.135 "is_configured": true, 00:08:58.135 "data_offset": 0, 00:08:58.135 "data_size": 65536 00:08:58.135 }, 00:08:58.135 { 00:08:58.135 "name": "BaseBdev3", 00:08:58.135 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:08:58.135 "is_configured": true, 00:08:58.135 "data_offset": 0, 00:08:58.135 "data_size": 65536 00:08:58.135 } 00:08:58.135 ] 00:08:58.135 }' 00:08:58.135 10:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:58.135 10:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.768 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:08:58.768 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:58.768 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:58.768 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:58.768 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:58.768 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:58.768 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:58.768 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:59.026 [2024-06-10 10:13:04.405423] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.026 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:59.026 "name": "Existed_Raid", 00:08:59.026 "aliases": [ 00:08:59.026 "059ab84c-2712-11ef-b084-113036b5c18d" 00:08:59.026 ], 00:08:59.026 "product_name": "Raid Volume", 00:08:59.026 "block_size": 512, 00:08:59.026 "num_blocks": 196608, 00:08:59.026 "uuid": "059ab84c-2712-11ef-b084-113036b5c18d", 00:08:59.026 "assigned_rate_limits": { 00:08:59.026 "rw_ios_per_sec": 0, 00:08:59.026 "rw_mbytes_per_sec": 0, 00:08:59.026 "r_mbytes_per_sec": 0, 00:08:59.026 "w_mbytes_per_sec": 0 00:08:59.026 }, 00:08:59.026 "claimed": false, 00:08:59.026 "zoned": false, 00:08:59.026 "supported_io_types": { 00:08:59.026 "read": true, 00:08:59.026 "write": true, 00:08:59.026 "unmap": true, 00:08:59.026 "write_zeroes": true, 00:08:59.026 "flush": true, 00:08:59.026 "reset": true, 00:08:59.026 "compare": false, 
00:08:59.026 "compare_and_write": false, 00:08:59.026 "abort": false, 00:08:59.027 "nvme_admin": false, 00:08:59.027 "nvme_io": false 00:08:59.027 }, 00:08:59.027 "memory_domains": [ 00:08:59.027 { 00:08:59.027 "dma_device_id": "system", 00:08:59.027 "dma_device_type": 1 00:08:59.027 }, 00:08:59.027 { 00:08:59.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.027 "dma_device_type": 2 00:08:59.027 }, 00:08:59.027 { 00:08:59.027 "dma_device_id": "system", 00:08:59.027 "dma_device_type": 1 00:08:59.027 }, 00:08:59.027 { 00:08:59.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.027 "dma_device_type": 2 00:08:59.027 }, 00:08:59.027 { 00:08:59.027 "dma_device_id": "system", 00:08:59.027 "dma_device_type": 1 00:08:59.027 }, 00:08:59.027 { 00:08:59.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.027 "dma_device_type": 2 00:08:59.027 } 00:08:59.027 ], 00:08:59.027 "driver_specific": { 00:08:59.027 "raid": { 00:08:59.027 "uuid": "059ab84c-2712-11ef-b084-113036b5c18d", 00:08:59.027 "strip_size_kb": 64, 00:08:59.027 "state": "online", 00:08:59.027 "raid_level": "raid0", 00:08:59.027 "superblock": false, 00:08:59.027 "num_base_bdevs": 3, 00:08:59.027 "num_base_bdevs_discovered": 3, 00:08:59.027 "num_base_bdevs_operational": 3, 00:08:59.027 "base_bdevs_list": [ 00:08:59.027 { 00:08:59.027 "name": "NewBaseBdev", 00:08:59.027 "uuid": "01ac1066-2712-11ef-b084-113036b5c18d", 00:08:59.027 "is_configured": true, 00:08:59.027 "data_offset": 0, 00:08:59.027 "data_size": 65536 00:08:59.027 }, 00:08:59.027 { 00:08:59.027 "name": "BaseBdev2", 00:08:59.027 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:59.027 "is_configured": true, 00:08:59.027 "data_offset": 0, 00:08:59.027 "data_size": 65536 00:08:59.027 }, 00:08:59.027 { 00:08:59.027 "name": "BaseBdev3", 00:08:59.027 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:08:59.027 "is_configured": true, 00:08:59.027 "data_offset": 0, 00:08:59.027 "data_size": 65536 00:08:59.027 } 00:08:59.027 ] 00:08:59.027 } 00:08:59.027 } 00:08:59.027 }' 00:08:59.027 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.027 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:08:59.027 BaseBdev2 00:08:59.027 BaseBdev3' 00:08:59.027 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:59.027 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:08:59.027 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:59.285 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:59.285 "name": "NewBaseBdev", 00:08:59.285 "aliases": [ 00:08:59.285 "01ac1066-2712-11ef-b084-113036b5c18d" 00:08:59.285 ], 00:08:59.285 "product_name": "Malloc disk", 00:08:59.285 "block_size": 512, 00:08:59.285 "num_blocks": 65536, 00:08:59.285 "uuid": "01ac1066-2712-11ef-b084-113036b5c18d", 00:08:59.285 "assigned_rate_limits": { 00:08:59.285 "rw_ios_per_sec": 0, 00:08:59.285 "rw_mbytes_per_sec": 0, 00:08:59.285 "r_mbytes_per_sec": 0, 00:08:59.285 "w_mbytes_per_sec": 0 00:08:59.285 }, 00:08:59.285 "claimed": true, 00:08:59.285 "claim_type": "exclusive_write", 00:08:59.285 "zoned": false, 00:08:59.285 "supported_io_types": { 00:08:59.285 "read": true, 00:08:59.285 
"write": true, 00:08:59.285 "unmap": true, 00:08:59.285 "write_zeroes": true, 00:08:59.285 "flush": true, 00:08:59.285 "reset": true, 00:08:59.285 "compare": false, 00:08:59.285 "compare_and_write": false, 00:08:59.285 "abort": true, 00:08:59.285 "nvme_admin": false, 00:08:59.285 "nvme_io": false 00:08:59.285 }, 00:08:59.285 "memory_domains": [ 00:08:59.285 { 00:08:59.285 "dma_device_id": "system", 00:08:59.285 "dma_device_type": 1 00:08:59.285 }, 00:08:59.285 { 00:08:59.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.285 "dma_device_type": 2 00:08:59.285 } 00:08:59.285 ], 00:08:59.285 "driver_specific": {} 00:08:59.286 }' 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:59.286 10:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:59.851 "name": "BaseBdev2", 00:08:59.851 "aliases": [ 00:08:59.851 "ff911a01-2711-11ef-b084-113036b5c18d" 00:08:59.851 ], 00:08:59.851 "product_name": "Malloc disk", 00:08:59.851 "block_size": 512, 00:08:59.851 "num_blocks": 65536, 00:08:59.851 "uuid": "ff911a01-2711-11ef-b084-113036b5c18d", 00:08:59.851 "assigned_rate_limits": { 00:08:59.851 "rw_ios_per_sec": 0, 00:08:59.851 "rw_mbytes_per_sec": 0, 00:08:59.851 "r_mbytes_per_sec": 0, 00:08:59.851 "w_mbytes_per_sec": 0 00:08:59.851 }, 00:08:59.851 "claimed": true, 00:08:59.851 "claim_type": "exclusive_write", 00:08:59.851 "zoned": false, 00:08:59.851 "supported_io_types": { 00:08:59.851 "read": true, 00:08:59.851 "write": true, 00:08:59.851 "unmap": true, 00:08:59.851 "write_zeroes": true, 00:08:59.851 "flush": true, 00:08:59.851 "reset": true, 00:08:59.851 "compare": false, 00:08:59.851 "compare_and_write": false, 00:08:59.851 "abort": true, 00:08:59.851 "nvme_admin": false, 00:08:59.851 "nvme_io": false 00:08:59.851 }, 00:08:59.851 "memory_domains": [ 00:08:59.851 { 00:08:59.851 "dma_device_id": "system", 00:08:59.851 "dma_device_type": 1 00:08:59.851 }, 00:08:59.851 { 00:08:59.851 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:59.851 "dma_device_type": 2 00:08:59.851 } 00:08:59.851 ], 00:08:59.851 "driver_specific": {} 00:08:59.851 }' 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:59.851 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:00.110 "name": "BaseBdev3", 00:09:00.110 "aliases": [ 00:09:00.110 "ffff93d6-2711-11ef-b084-113036b5c18d" 00:09:00.110 ], 00:09:00.110 "product_name": "Malloc disk", 00:09:00.110 "block_size": 512, 00:09:00.110 "num_blocks": 65536, 00:09:00.110 "uuid": "ffff93d6-2711-11ef-b084-113036b5c18d", 00:09:00.110 "assigned_rate_limits": { 00:09:00.110 "rw_ios_per_sec": 0, 00:09:00.110 "rw_mbytes_per_sec": 0, 00:09:00.110 "r_mbytes_per_sec": 0, 00:09:00.110 "w_mbytes_per_sec": 0 00:09:00.110 }, 00:09:00.110 "claimed": true, 00:09:00.110 "claim_type": "exclusive_write", 00:09:00.110 "zoned": false, 00:09:00.110 "supported_io_types": { 00:09:00.110 "read": true, 00:09:00.110 "write": true, 00:09:00.110 "unmap": true, 00:09:00.110 "write_zeroes": true, 00:09:00.110 "flush": true, 00:09:00.110 "reset": true, 00:09:00.110 "compare": false, 00:09:00.110 "compare_and_write": false, 00:09:00.110 "abort": true, 00:09:00.110 "nvme_admin": false, 00:09:00.110 "nvme_io": false 00:09:00.110 }, 00:09:00.110 "memory_domains": [ 00:09:00.110 { 00:09:00.110 "dma_device_id": "system", 00:09:00.110 "dma_device_type": 1 00:09:00.110 }, 00:09:00.110 { 00:09:00.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.110 "dma_device_type": 2 00:09:00.110 } 00:09:00.110 ], 00:09:00.110 "driver_specific": {} 00:09:00.110 }' 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:00.110 10:13:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:00.110 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:00.369 [2024-06-10 10:13:05.869418] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:00.369 [2024-06-10 10:13:05.869444] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.369 [2024-06-10 10:13:05.869477] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.369 [2024-06-10 10:13:05.869490] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.369 [2024-06-10 10:13:05.869494] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b2ada00 name Existed_Raid, state offline 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 52714 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 52714 ']' 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 52714 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps -c -o command 52714 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # tail -1 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:09:00.369 killing process with pid 52714 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 52714' 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 52714 00:09:00.369 10:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 52714 00:09:00.369 [2024-06-10 10:13:05.909501] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.369 [2024-06-10 10:13:05.923967] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:00.628 ************************************ 00:09:00.628 END TEST 
raid_state_function_test 00:09:00.628 00:09:00.628 real 0m24.959s 00:09:00.628 user 0m45.641s 00:09:00.628 sys 0m3.548s 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.628 ************************************ 00:09:00.628 10:13:06 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:00.628 10:13:06 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:09:00.628 10:13:06 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:00.628 10:13:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.628 ************************************ 00:09:00.628 START TEST raid_state_function_test_sb 00:09:00.628 ************************************ 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 3 true 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:00.628 10:13:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=53447 00:09:00.628 Process raid pid: 53447 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 53447' 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 53447 /var/tmp/spdk-raid.sock 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 53447 ']' 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:00.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:00.628 10:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.628 [2024-06-10 10:13:06.165224] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:09:00.628 [2024-06-10 10:13:06.165544] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:01.563 EAL: TSC is not safe to use in SMP mode 00:09:01.563 EAL: TSC is not invariant 00:09:01.563 [2024-06-10 10:13:06.944296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.563 [2024-06-10 10:13:07.038167] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
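For orientation while reading the trace: raid_state_function_test_sb drives the same RPC sequence as the preceding raid_state_function_test, the only difference being the -s (superblock) flag passed to bdev_raid_create. A minimal manual equivalent is sketched below; it assumes the bdev_svc instance launched above is listening on /var/tmp/spdk-raid.sock and reuses the paths and arguments from this run, so it is an illustration of the flow rather than part of the test output.

    # create three 32 MiB malloc base bdevs (512-byte blocks, 65536 blocks each)
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "$b"
    done
    # assemble them into a raid0 volume with 64 KiB strips and an on-disk superblock
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # the raid bdev should now report state "online" with all three base bdevs discovered
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # tear down
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid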
00:09:01.563 [2024-06-10 10:13:07.040824] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.563 [2024-06-10 10:13:07.041718] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.563 [2024-06-10 10:13:07.041734] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.821 10:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:01.821 10:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:09:01.821 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:02.080 [2024-06-10 10:13:07.557806] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.080 [2024-06-10 10:13:07.557859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.080 [2024-06-10 10:13:07.557863] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.080 [2024-06-10 10:13:07.557871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.080 [2024-06-10 10:13:07.557875] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:02.080 [2024-06-10 10:13:07.557881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:02.080 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.339 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:02.339 "name": "Existed_Raid", 00:09:02.339 "uuid": "0866188b-2712-11ef-b084-113036b5c18d", 00:09:02.339 "strip_size_kb": 64, 00:09:02.339 "state": "configuring", 00:09:02.339 "raid_level": "raid0", 00:09:02.339 "superblock": true, 00:09:02.339 "num_base_bdevs": 3, 00:09:02.339 "num_base_bdevs_discovered": 0, 00:09:02.339 
"num_base_bdevs_operational": 3, 00:09:02.339 "base_bdevs_list": [ 00:09:02.339 { 00:09:02.339 "name": "BaseBdev1", 00:09:02.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.339 "is_configured": false, 00:09:02.339 "data_offset": 0, 00:09:02.339 "data_size": 0 00:09:02.339 }, 00:09:02.339 { 00:09:02.339 "name": "BaseBdev2", 00:09:02.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.339 "is_configured": false, 00:09:02.339 "data_offset": 0, 00:09:02.339 "data_size": 0 00:09:02.339 }, 00:09:02.339 { 00:09:02.339 "name": "BaseBdev3", 00:09:02.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.339 "is_configured": false, 00:09:02.339 "data_offset": 0, 00:09:02.339 "data_size": 0 00:09:02.339 } 00:09:02.339 ] 00:09:02.339 }' 00:09:02.339 10:13:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:02.339 10:13:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.018 10:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:03.018 [2024-06-10 10:13:08.413801] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.019 [2024-06-10 10:13:08.413824] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bedf500 name Existed_Raid, state configuring 00:09:03.019 10:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:03.278 [2024-06-10 10:13:08.713831] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.278 [2024-06-10 10:13:08.713896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.278 [2024-06-10 10:13:08.713900] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.278 [2024-06-10 10:13:08.713908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.278 [2024-06-10 10:13:08.713911] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:03.278 [2024-06-10 10:13:08.713917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.278 10:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.538 [2024-06-10 10:13:08.950784] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.538 BaseBdev1 00:09:03.538 10:13:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:03.538 10:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:09:03.538 10:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:03.538 10:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:09:03.538 10:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:03.538 10:13:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:03.538 10:13:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:03.797 10:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:04.056 [ 00:09:04.056 { 00:09:04.056 "name": "BaseBdev1", 00:09:04.056 "aliases": [ 00:09:04.056 "093a8116-2712-11ef-b084-113036b5c18d" 00:09:04.056 ], 00:09:04.056 "product_name": "Malloc disk", 00:09:04.056 "block_size": 512, 00:09:04.056 "num_blocks": 65536, 00:09:04.056 "uuid": "093a8116-2712-11ef-b084-113036b5c18d", 00:09:04.056 "assigned_rate_limits": { 00:09:04.056 "rw_ios_per_sec": 0, 00:09:04.056 "rw_mbytes_per_sec": 0, 00:09:04.056 "r_mbytes_per_sec": 0, 00:09:04.056 "w_mbytes_per_sec": 0 00:09:04.056 }, 00:09:04.056 "claimed": true, 00:09:04.056 "claim_type": "exclusive_write", 00:09:04.056 "zoned": false, 00:09:04.056 "supported_io_types": { 00:09:04.056 "read": true, 00:09:04.056 "write": true, 00:09:04.056 "unmap": true, 00:09:04.056 "write_zeroes": true, 00:09:04.056 "flush": true, 00:09:04.056 "reset": true, 00:09:04.056 "compare": false, 00:09:04.056 "compare_and_write": false, 00:09:04.056 "abort": true, 00:09:04.056 "nvme_admin": false, 00:09:04.056 "nvme_io": false 00:09:04.056 }, 00:09:04.056 "memory_domains": [ 00:09:04.056 { 00:09:04.056 "dma_device_id": "system", 00:09:04.056 "dma_device_type": 1 00:09:04.056 }, 00:09:04.056 { 00:09:04.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.056 "dma_device_type": 2 00:09:04.056 } 00:09:04.056 ], 00:09:04.056 "driver_specific": {} 00:09:04.056 } 00:09:04.056 ] 00:09:04.056 10:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.057 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.316 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:04.316 "name": "Existed_Raid", 00:09:04.316 "uuid": 
"09167dc8-2712-11ef-b084-113036b5c18d", 00:09:04.316 "strip_size_kb": 64, 00:09:04.316 "state": "configuring", 00:09:04.316 "raid_level": "raid0", 00:09:04.316 "superblock": true, 00:09:04.316 "num_base_bdevs": 3, 00:09:04.316 "num_base_bdevs_discovered": 1, 00:09:04.316 "num_base_bdevs_operational": 3, 00:09:04.316 "base_bdevs_list": [ 00:09:04.316 { 00:09:04.316 "name": "BaseBdev1", 00:09:04.316 "uuid": "093a8116-2712-11ef-b084-113036b5c18d", 00:09:04.316 "is_configured": true, 00:09:04.316 "data_offset": 2048, 00:09:04.316 "data_size": 63488 00:09:04.316 }, 00:09:04.316 { 00:09:04.316 "name": "BaseBdev2", 00:09:04.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.316 "is_configured": false, 00:09:04.316 "data_offset": 0, 00:09:04.316 "data_size": 0 00:09:04.316 }, 00:09:04.316 { 00:09:04.316 "name": "BaseBdev3", 00:09:04.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.316 "is_configured": false, 00:09:04.316 "data_offset": 0, 00:09:04.316 "data_size": 0 00:09:04.316 } 00:09:04.316 ] 00:09:04.316 }' 00:09:04.316 10:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:04.316 10:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.575 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:04.833 [2024-06-10 10:13:10.349888] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:04.833 [2024-06-10 10:13:10.349919] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bedf500 name Existed_Raid, state configuring 00:09:04.833 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:05.093 [2024-06-10 10:13:10.621891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.093 [2024-06-10 10:13:10.622604] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:05.093 [2024-06-10 10:13:10.622643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:05.093 [2024-06-10 10:13:10.622647] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:05.093 [2024-06-10 10:13:10.622655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.093 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.351 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:05.351 "name": "Existed_Raid", 00:09:05.351 "uuid": "0a39a359-2712-11ef-b084-113036b5c18d", 00:09:05.351 "strip_size_kb": 64, 00:09:05.351 "state": "configuring", 00:09:05.351 "raid_level": "raid0", 00:09:05.351 "superblock": true, 00:09:05.351 "num_base_bdevs": 3, 00:09:05.351 "num_base_bdevs_discovered": 1, 00:09:05.351 "num_base_bdevs_operational": 3, 00:09:05.351 "base_bdevs_list": [ 00:09:05.351 { 00:09:05.351 "name": "BaseBdev1", 00:09:05.351 "uuid": "093a8116-2712-11ef-b084-113036b5c18d", 00:09:05.351 "is_configured": true, 00:09:05.351 "data_offset": 2048, 00:09:05.351 "data_size": 63488 00:09:05.351 }, 00:09:05.351 { 00:09:05.351 "name": "BaseBdev2", 00:09:05.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.351 "is_configured": false, 00:09:05.351 "data_offset": 0, 00:09:05.351 "data_size": 0 00:09:05.351 }, 00:09:05.351 { 00:09:05.351 "name": "BaseBdev3", 00:09:05.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.351 "is_configured": false, 00:09:05.351 "data_offset": 0, 00:09:05.351 "data_size": 0 00:09:05.351 } 00:09:05.351 ] 00:09:05.351 }' 00:09:05.351 10:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:05.351 10:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.919 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:05.919 [2024-06-10 10:13:11.458092] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.919 BaseBdev2 00:09:05.919 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:05.919 10:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:09:05.919 10:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:05.919 10:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:09:05.919 10:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:05.919 10:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:05.919 10:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:06.177 10:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:06.435 [ 00:09:06.435 { 00:09:06.435 "name": "BaseBdev2", 00:09:06.435 "aliases": [ 00:09:06.435 "0ab93655-2712-11ef-b084-113036b5c18d" 00:09:06.435 ], 00:09:06.435 "product_name": "Malloc disk", 00:09:06.435 "block_size": 512, 00:09:06.435 "num_blocks": 65536, 00:09:06.435 "uuid": "0ab93655-2712-11ef-b084-113036b5c18d", 00:09:06.435 "assigned_rate_limits": { 00:09:06.435 "rw_ios_per_sec": 0, 00:09:06.435 "rw_mbytes_per_sec": 0, 00:09:06.435 "r_mbytes_per_sec": 0, 00:09:06.435 "w_mbytes_per_sec": 0 00:09:06.435 }, 00:09:06.435 "claimed": true, 00:09:06.435 "claim_type": "exclusive_write", 00:09:06.435 "zoned": false, 00:09:06.435 "supported_io_types": { 00:09:06.435 "read": true, 00:09:06.435 "write": true, 00:09:06.435 "unmap": true, 00:09:06.435 "write_zeroes": true, 00:09:06.435 "flush": true, 00:09:06.435 "reset": true, 00:09:06.435 "compare": false, 00:09:06.435 "compare_and_write": false, 00:09:06.435 "abort": true, 00:09:06.435 "nvme_admin": false, 00:09:06.435 "nvme_io": false 00:09:06.435 }, 00:09:06.435 "memory_domains": [ 00:09:06.435 { 00:09:06.435 "dma_device_id": "system", 00:09:06.435 "dma_device_type": 1 00:09:06.435 }, 00:09:06.435 { 00:09:06.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.435 "dma_device_type": 2 00:09:06.435 } 00:09:06.435 ], 00:09:06.435 "driver_specific": {} 00:09:06.435 } 00:09:06.435 ] 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.435 10:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.692 10:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:06.692 "name": "Existed_Raid", 00:09:06.692 "uuid": "0a39a359-2712-11ef-b084-113036b5c18d", 00:09:06.692 "strip_size_kb": 64, 
00:09:06.692 "state": "configuring", 00:09:06.692 "raid_level": "raid0", 00:09:06.692 "superblock": true, 00:09:06.692 "num_base_bdevs": 3, 00:09:06.692 "num_base_bdevs_discovered": 2, 00:09:06.692 "num_base_bdevs_operational": 3, 00:09:06.692 "base_bdevs_list": [ 00:09:06.692 { 00:09:06.692 "name": "BaseBdev1", 00:09:06.692 "uuid": "093a8116-2712-11ef-b084-113036b5c18d", 00:09:06.692 "is_configured": true, 00:09:06.692 "data_offset": 2048, 00:09:06.692 "data_size": 63488 00:09:06.692 }, 00:09:06.692 { 00:09:06.692 "name": "BaseBdev2", 00:09:06.692 "uuid": "0ab93655-2712-11ef-b084-113036b5c18d", 00:09:06.692 "is_configured": true, 00:09:06.692 "data_offset": 2048, 00:09:06.692 "data_size": 63488 00:09:06.692 }, 00:09:06.692 { 00:09:06.692 "name": "BaseBdev3", 00:09:06.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.692 "is_configured": false, 00:09:06.692 "data_offset": 0, 00:09:06.692 "data_size": 0 00:09:06.692 } 00:09:06.692 ] 00:09:06.692 }' 00:09:06.692 10:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:06.693 10:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.950 10:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.209 [2024-06-10 10:13:12.770104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.209 [2024-06-10 10:13:12.770160] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bedfa00 00:09:07.209 [2024-06-10 10:13:12.770165] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.209 [2024-06-10 10:13:12.770183] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bf42ec0 00:09:07.209 [2024-06-10 10:13:12.770221] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bedfa00 00:09:07.209 [2024-06-10 10:13:12.770224] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82bedfa00 00:09:07.209 [2024-06-10 10:13:12.770240] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.209 BaseBdev3 00:09:07.209 10:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:07.209 10:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:09:07.209 10:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:07.209 10:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:09:07.209 10:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:07.209 10:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:07.209 10:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:07.468 10:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:07.727 [ 00:09:07.727 { 00:09:07.727 "name": "BaseBdev3", 00:09:07.727 "aliases": [ 00:09:07.727 "0b816a11-2712-11ef-b084-113036b5c18d" 00:09:07.727 ], 
00:09:07.727 "product_name": "Malloc disk", 00:09:07.727 "block_size": 512, 00:09:07.727 "num_blocks": 65536, 00:09:07.727 "uuid": "0b816a11-2712-11ef-b084-113036b5c18d", 00:09:07.727 "assigned_rate_limits": { 00:09:07.727 "rw_ios_per_sec": 0, 00:09:07.727 "rw_mbytes_per_sec": 0, 00:09:07.727 "r_mbytes_per_sec": 0, 00:09:07.727 "w_mbytes_per_sec": 0 00:09:07.727 }, 00:09:07.727 "claimed": true, 00:09:07.727 "claim_type": "exclusive_write", 00:09:07.727 "zoned": false, 00:09:07.727 "supported_io_types": { 00:09:07.727 "read": true, 00:09:07.727 "write": true, 00:09:07.727 "unmap": true, 00:09:07.727 "write_zeroes": true, 00:09:07.727 "flush": true, 00:09:07.727 "reset": true, 00:09:07.727 "compare": false, 00:09:07.727 "compare_and_write": false, 00:09:07.727 "abort": true, 00:09:07.727 "nvme_admin": false, 00:09:07.727 "nvme_io": false 00:09:07.727 }, 00:09:07.727 "memory_domains": [ 00:09:07.727 { 00:09:07.727 "dma_device_id": "system", 00:09:07.727 "dma_device_type": 1 00:09:07.727 }, 00:09:07.727 { 00:09:07.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.727 "dma_device_type": 2 00:09:07.727 } 00:09:07.727 ], 00:09:07.727 "driver_specific": {} 00:09:07.727 } 00:09:07.727 ] 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.727 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.985 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:07.985 "name": "Existed_Raid", 00:09:07.985 "uuid": "0a39a359-2712-11ef-b084-113036b5c18d", 00:09:07.985 "strip_size_kb": 64, 00:09:07.985 "state": "online", 00:09:07.985 "raid_level": "raid0", 00:09:07.985 "superblock": true, 00:09:07.985 "num_base_bdevs": 3, 00:09:07.985 "num_base_bdevs_discovered": 3, 00:09:07.985 "num_base_bdevs_operational": 3, 00:09:07.985 "base_bdevs_list": [ 00:09:07.985 { 00:09:07.985 
"name": "BaseBdev1", 00:09:07.985 "uuid": "093a8116-2712-11ef-b084-113036b5c18d", 00:09:07.985 "is_configured": true, 00:09:07.985 "data_offset": 2048, 00:09:07.985 "data_size": 63488 00:09:07.985 }, 00:09:07.985 { 00:09:07.985 "name": "BaseBdev2", 00:09:07.985 "uuid": "0ab93655-2712-11ef-b084-113036b5c18d", 00:09:07.985 "is_configured": true, 00:09:07.985 "data_offset": 2048, 00:09:07.985 "data_size": 63488 00:09:07.985 }, 00:09:07.985 { 00:09:07.985 "name": "BaseBdev3", 00:09:07.985 "uuid": "0b816a11-2712-11ef-b084-113036b5c18d", 00:09:07.985 "is_configured": true, 00:09:07.985 "data_offset": 2048, 00:09:07.985 "data_size": 63488 00:09:07.985 } 00:09:07.985 ] 00:09:07.985 }' 00:09:07.985 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:07.985 10:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.553 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.553 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:08.553 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:08.553 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:08.553 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:08.553 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:08.553 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:08.553 10:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:08.812 [2024-06-10 10:13:14.170068] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.812 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:08.812 "name": "Existed_Raid", 00:09:08.812 "aliases": [ 00:09:08.812 "0a39a359-2712-11ef-b084-113036b5c18d" 00:09:08.812 ], 00:09:08.812 "product_name": "Raid Volume", 00:09:08.812 "block_size": 512, 00:09:08.812 "num_blocks": 190464, 00:09:08.812 "uuid": "0a39a359-2712-11ef-b084-113036b5c18d", 00:09:08.812 "assigned_rate_limits": { 00:09:08.812 "rw_ios_per_sec": 0, 00:09:08.812 "rw_mbytes_per_sec": 0, 00:09:08.812 "r_mbytes_per_sec": 0, 00:09:08.812 "w_mbytes_per_sec": 0 00:09:08.812 }, 00:09:08.812 "claimed": false, 00:09:08.812 "zoned": false, 00:09:08.812 "supported_io_types": { 00:09:08.812 "read": true, 00:09:08.812 "write": true, 00:09:08.812 "unmap": true, 00:09:08.812 "write_zeroes": true, 00:09:08.812 "flush": true, 00:09:08.812 "reset": true, 00:09:08.812 "compare": false, 00:09:08.812 "compare_and_write": false, 00:09:08.812 "abort": false, 00:09:08.812 "nvme_admin": false, 00:09:08.812 "nvme_io": false 00:09:08.812 }, 00:09:08.812 "memory_domains": [ 00:09:08.812 { 00:09:08.812 "dma_device_id": "system", 00:09:08.812 "dma_device_type": 1 00:09:08.812 }, 00:09:08.812 { 00:09:08.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.812 "dma_device_type": 2 00:09:08.812 }, 00:09:08.812 { 00:09:08.812 "dma_device_id": "system", 00:09:08.812 "dma_device_type": 1 00:09:08.812 }, 00:09:08.812 { 00:09:08.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.812 "dma_device_type": 2 00:09:08.812 }, 
00:09:08.812 { 00:09:08.812 "dma_device_id": "system", 00:09:08.812 "dma_device_type": 1 00:09:08.812 }, 00:09:08.812 { 00:09:08.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.812 "dma_device_type": 2 00:09:08.812 } 00:09:08.812 ], 00:09:08.812 "driver_specific": { 00:09:08.812 "raid": { 00:09:08.812 "uuid": "0a39a359-2712-11ef-b084-113036b5c18d", 00:09:08.812 "strip_size_kb": 64, 00:09:08.812 "state": "online", 00:09:08.812 "raid_level": "raid0", 00:09:08.812 "superblock": true, 00:09:08.812 "num_base_bdevs": 3, 00:09:08.812 "num_base_bdevs_discovered": 3, 00:09:08.812 "num_base_bdevs_operational": 3, 00:09:08.812 "base_bdevs_list": [ 00:09:08.812 { 00:09:08.812 "name": "BaseBdev1", 00:09:08.812 "uuid": "093a8116-2712-11ef-b084-113036b5c18d", 00:09:08.812 "is_configured": true, 00:09:08.812 "data_offset": 2048, 00:09:08.812 "data_size": 63488 00:09:08.812 }, 00:09:08.812 { 00:09:08.812 "name": "BaseBdev2", 00:09:08.812 "uuid": "0ab93655-2712-11ef-b084-113036b5c18d", 00:09:08.812 "is_configured": true, 00:09:08.812 "data_offset": 2048, 00:09:08.812 "data_size": 63488 00:09:08.812 }, 00:09:08.812 { 00:09:08.812 "name": "BaseBdev3", 00:09:08.812 "uuid": "0b816a11-2712-11ef-b084-113036b5c18d", 00:09:08.812 "is_configured": true, 00:09:08.812 "data_offset": 2048, 00:09:08.812 "data_size": 63488 00:09:08.812 } 00:09:08.812 ] 00:09:08.812 } 00:09:08.812 } 00:09:08.812 }' 00:09:08.812 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.812 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:08.812 BaseBdev2 00:09:08.812 BaseBdev3' 00:09:08.812 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:08.812 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:08.812 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:09.071 "name": "BaseBdev1", 00:09:09.071 "aliases": [ 00:09:09.071 "093a8116-2712-11ef-b084-113036b5c18d" 00:09:09.071 ], 00:09:09.071 "product_name": "Malloc disk", 00:09:09.071 "block_size": 512, 00:09:09.071 "num_blocks": 65536, 00:09:09.071 "uuid": "093a8116-2712-11ef-b084-113036b5c18d", 00:09:09.071 "assigned_rate_limits": { 00:09:09.071 "rw_ios_per_sec": 0, 00:09:09.071 "rw_mbytes_per_sec": 0, 00:09:09.071 "r_mbytes_per_sec": 0, 00:09:09.071 "w_mbytes_per_sec": 0 00:09:09.071 }, 00:09:09.071 "claimed": true, 00:09:09.071 "claim_type": "exclusive_write", 00:09:09.071 "zoned": false, 00:09:09.071 "supported_io_types": { 00:09:09.071 "read": true, 00:09:09.071 "write": true, 00:09:09.071 "unmap": true, 00:09:09.071 "write_zeroes": true, 00:09:09.071 "flush": true, 00:09:09.071 "reset": true, 00:09:09.071 "compare": false, 00:09:09.071 "compare_and_write": false, 00:09:09.071 "abort": true, 00:09:09.071 "nvme_admin": false, 00:09:09.071 "nvme_io": false 00:09:09.071 }, 00:09:09.071 "memory_domains": [ 00:09:09.071 { 00:09:09.071 "dma_device_id": "system", 00:09:09.071 "dma_device_type": 1 00:09:09.071 }, 00:09:09.071 { 00:09:09.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.071 "dma_device_type": 2 00:09:09.071 } 00:09:09.071 ], 00:09:09.071 "driver_specific": {} 
00:09:09.071 }' 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:09.071 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:09.330 "name": "BaseBdev2", 00:09:09.330 "aliases": [ 00:09:09.330 "0ab93655-2712-11ef-b084-113036b5c18d" 00:09:09.330 ], 00:09:09.330 "product_name": "Malloc disk", 00:09:09.330 "block_size": 512, 00:09:09.330 "num_blocks": 65536, 00:09:09.330 "uuid": "0ab93655-2712-11ef-b084-113036b5c18d", 00:09:09.330 "assigned_rate_limits": { 00:09:09.330 "rw_ios_per_sec": 0, 00:09:09.330 "rw_mbytes_per_sec": 0, 00:09:09.330 "r_mbytes_per_sec": 0, 00:09:09.330 "w_mbytes_per_sec": 0 00:09:09.330 }, 00:09:09.330 "claimed": true, 00:09:09.330 "claim_type": "exclusive_write", 00:09:09.330 "zoned": false, 00:09:09.330 "supported_io_types": { 00:09:09.330 "read": true, 00:09:09.330 "write": true, 00:09:09.330 "unmap": true, 00:09:09.330 "write_zeroes": true, 00:09:09.330 "flush": true, 00:09:09.330 "reset": true, 00:09:09.330 "compare": false, 00:09:09.330 "compare_and_write": false, 00:09:09.330 "abort": true, 00:09:09.330 "nvme_admin": false, 00:09:09.330 "nvme_io": false 00:09:09.330 }, 00:09:09.330 "memory_domains": [ 00:09:09.330 { 00:09:09.330 "dma_device_id": "system", 00:09:09.330 "dma_device_type": 1 00:09:09.330 }, 00:09:09.330 { 00:09:09.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.330 "dma_device_type": 2 00:09:09.330 } 00:09:09.330 ], 00:09:09.330 "driver_specific": {} 00:09:09.330 }' 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:09.330 
10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:09.330 10:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:09.589 "name": "BaseBdev3", 00:09:09.589 "aliases": [ 00:09:09.589 "0b816a11-2712-11ef-b084-113036b5c18d" 00:09:09.589 ], 00:09:09.589 "product_name": "Malloc disk", 00:09:09.589 "block_size": 512, 00:09:09.589 "num_blocks": 65536, 00:09:09.589 "uuid": "0b816a11-2712-11ef-b084-113036b5c18d", 00:09:09.589 "assigned_rate_limits": { 00:09:09.589 "rw_ios_per_sec": 0, 00:09:09.589 "rw_mbytes_per_sec": 0, 00:09:09.589 "r_mbytes_per_sec": 0, 00:09:09.589 "w_mbytes_per_sec": 0 00:09:09.589 }, 00:09:09.589 "claimed": true, 00:09:09.589 "claim_type": "exclusive_write", 00:09:09.589 "zoned": false, 00:09:09.589 "supported_io_types": { 00:09:09.589 "read": true, 00:09:09.589 "write": true, 00:09:09.589 "unmap": true, 00:09:09.589 "write_zeroes": true, 00:09:09.589 "flush": true, 00:09:09.589 "reset": true, 00:09:09.589 "compare": false, 00:09:09.589 "compare_and_write": false, 00:09:09.589 "abort": true, 00:09:09.589 "nvme_admin": false, 00:09:09.589 "nvme_io": false 00:09:09.589 }, 00:09:09.589 "memory_domains": [ 00:09:09.589 { 00:09:09.589 "dma_device_id": "system", 00:09:09.589 "dma_device_type": 1 00:09:09.589 }, 00:09:09.589 { 00:09:09.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.589 "dma_device_type": 2 00:09:09.589 } 00:09:09.589 ], 00:09:09.589 "driver_specific": {} 00:09:09.589 }' 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:09.589 10:13:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:09.589 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:10.155 [2024-06-10 10:13:15.470083] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.155 [2024-06-10 10:13:15.470104] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.155 [2024-06-10 10:13:15.470117] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:10.155 "name": "Existed_Raid", 00:09:10.155 "uuid": "0a39a359-2712-11ef-b084-113036b5c18d", 00:09:10.155 "strip_size_kb": 64, 00:09:10.155 "state": "offline", 00:09:10.155 "raid_level": "raid0", 00:09:10.155 "superblock": true, 00:09:10.155 "num_base_bdevs": 3, 00:09:10.155 "num_base_bdevs_discovered": 2, 00:09:10.155 "num_base_bdevs_operational": 2, 00:09:10.155 "base_bdevs_list": [ 00:09:10.155 { 00:09:10.155 "name": null, 00:09:10.155 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:10.155 "is_configured": false, 00:09:10.155 "data_offset": 2048, 00:09:10.155 "data_size": 63488 00:09:10.155 }, 00:09:10.155 { 00:09:10.155 "name": "BaseBdev2", 00:09:10.155 "uuid": "0ab93655-2712-11ef-b084-113036b5c18d", 00:09:10.155 "is_configured": true, 00:09:10.155 "data_offset": 2048, 00:09:10.155 "data_size": 63488 00:09:10.155 }, 00:09:10.155 { 00:09:10.155 "name": "BaseBdev3", 00:09:10.155 "uuid": "0b816a11-2712-11ef-b084-113036b5c18d", 00:09:10.155 "is_configured": true, 00:09:10.155 "data_offset": 2048, 00:09:10.155 "data_size": 63488 00:09:10.155 } 00:09:10.155 ] 00:09:10.155 }' 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:10.155 10:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.414 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:10.414 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:10.414 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.414 10:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:10.671 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:10.671 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.671 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:10.929 [2024-06-10 10:13:16.454834] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.929 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:10.929 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:10.929 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.929 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:11.187 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:11.187 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.187 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:11.445 [2024-06-10 10:13:16.947582] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:11.445 [2024-06-10 10:13:16.947607] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bedfa00 name Existed_Raid, state offline 00:09:11.445 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:11.445 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:11.445 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:11.445 10:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.704 10:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:11.704 10:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:11.704 10:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:11.704 10:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:11.704 10:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:11.704 10:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:11.962 BaseBdev2 00:09:11.962 10:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:11.962 10:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:09:11.962 10:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:11.962 10:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:09:11.962 10:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:11.962 10:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:11.962 10:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:12.220 10:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.478 [ 00:09:12.478 { 00:09:12.478 "name": "BaseBdev2", 00:09:12.478 "aliases": [ 00:09:12.478 "0e550ad6-2712-11ef-b084-113036b5c18d" 00:09:12.478 ], 00:09:12.478 "product_name": "Malloc disk", 00:09:12.478 "block_size": 512, 00:09:12.478 "num_blocks": 65536, 00:09:12.478 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:12.478 "assigned_rate_limits": { 00:09:12.478 "rw_ios_per_sec": 0, 00:09:12.478 "rw_mbytes_per_sec": 0, 00:09:12.478 "r_mbytes_per_sec": 0, 00:09:12.478 "w_mbytes_per_sec": 0 00:09:12.478 }, 00:09:12.478 "claimed": false, 00:09:12.478 "zoned": false, 00:09:12.478 "supported_io_types": { 00:09:12.478 "read": true, 00:09:12.478 "write": true, 00:09:12.478 "unmap": true, 00:09:12.478 "write_zeroes": true, 00:09:12.478 "flush": true, 00:09:12.478 "reset": true, 00:09:12.478 "compare": false, 00:09:12.478 "compare_and_write": false, 00:09:12.478 "abort": true, 00:09:12.478 "nvme_admin": false, 00:09:12.478 "nvme_io": false 00:09:12.478 }, 00:09:12.478 "memory_domains": [ 00:09:12.478 { 00:09:12.478 "dma_device_id": "system", 00:09:12.478 "dma_device_type": 1 00:09:12.478 }, 00:09:12.478 { 00:09:12.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.478 "dma_device_type": 2 00:09:12.478 } 00:09:12.478 ], 00:09:12.478 "driver_specific": {} 00:09:12.478 } 00:09:12.478 ] 00:09:12.478 10:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:09:12.478 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:12.478 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- 
# (( i < num_base_bdevs )) 00:09:12.479 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.738 BaseBdev3 00:09:12.738 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:12.738 10:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:09:12.738 10:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:12.738 10:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:09:12.738 10:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:12.738 10:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:12.738 10:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:12.996 10:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:13.255 [ 00:09:13.255 { 00:09:13.255 "name": "BaseBdev3", 00:09:13.255 "aliases": [ 00:09:13.255 "0ebccdc0-2712-11ef-b084-113036b5c18d" 00:09:13.255 ], 00:09:13.255 "product_name": "Malloc disk", 00:09:13.255 "block_size": 512, 00:09:13.255 "num_blocks": 65536, 00:09:13.255 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:13.255 "assigned_rate_limits": { 00:09:13.255 "rw_ios_per_sec": 0, 00:09:13.255 "rw_mbytes_per_sec": 0, 00:09:13.255 "r_mbytes_per_sec": 0, 00:09:13.255 "w_mbytes_per_sec": 0 00:09:13.255 }, 00:09:13.255 "claimed": false, 00:09:13.255 "zoned": false, 00:09:13.255 "supported_io_types": { 00:09:13.255 "read": true, 00:09:13.255 "write": true, 00:09:13.255 "unmap": true, 00:09:13.255 "write_zeroes": true, 00:09:13.255 "flush": true, 00:09:13.255 "reset": true, 00:09:13.255 "compare": false, 00:09:13.255 "compare_and_write": false, 00:09:13.255 "abort": true, 00:09:13.255 "nvme_admin": false, 00:09:13.255 "nvme_io": false 00:09:13.255 }, 00:09:13.255 "memory_domains": [ 00:09:13.255 { 00:09:13.255 "dma_device_id": "system", 00:09:13.255 "dma_device_type": 1 00:09:13.255 }, 00:09:13.255 { 00:09:13.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.255 "dma_device_type": 2 00:09:13.255 } 00:09:13.255 ], 00:09:13.255 "driver_specific": {} 00:09:13.255 } 00:09:13.255 ] 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:13.255 [2024-06-10 10:13:18.816424] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.255 [2024-06-10 10:13:18.816469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.255 [2024-06-10 10:13:18.816476] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.255 [2024-06-10 10:13:18.816867] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:13.255 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:13.256 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:13.256 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.256 10:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.513 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:13.514 "name": "Existed_Raid", 00:09:13.514 "uuid": "0f1c0667-2712-11ef-b084-113036b5c18d", 00:09:13.514 "strip_size_kb": 64, 00:09:13.514 "state": "configuring", 00:09:13.514 "raid_level": "raid0", 00:09:13.514 "superblock": true, 00:09:13.514 "num_base_bdevs": 3, 00:09:13.514 "num_base_bdevs_discovered": 2, 00:09:13.514 "num_base_bdevs_operational": 3, 00:09:13.514 "base_bdevs_list": [ 00:09:13.514 { 00:09:13.514 "name": "BaseBdev1", 00:09:13.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.514 "is_configured": false, 00:09:13.514 "data_offset": 0, 00:09:13.514 "data_size": 0 00:09:13.514 }, 00:09:13.514 { 00:09:13.514 "name": "BaseBdev2", 00:09:13.514 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:13.514 "is_configured": true, 00:09:13.514 "data_offset": 2048, 00:09:13.514 "data_size": 63488 00:09:13.514 }, 00:09:13.514 { 00:09:13.514 "name": "BaseBdev3", 00:09:13.514 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:13.514 "is_configured": true, 00:09:13.514 "data_offset": 2048, 00:09:13.514 "data_size": 63488 00:09:13.514 } 00:09:13.514 ] 00:09:13.514 }' 00:09:13.514 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:13.514 10:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.772 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:14.029 [2024-06-10 10:13:19.616489] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:14.287 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.545 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:14.545 "name": "Existed_Raid", 00:09:14.545 "uuid": "0f1c0667-2712-11ef-b084-113036b5c18d", 00:09:14.545 "strip_size_kb": 64, 00:09:14.545 "state": "configuring", 00:09:14.545 "raid_level": "raid0", 00:09:14.545 "superblock": true, 00:09:14.545 "num_base_bdevs": 3, 00:09:14.545 "num_base_bdevs_discovered": 1, 00:09:14.545 "num_base_bdevs_operational": 3, 00:09:14.545 "base_bdevs_list": [ 00:09:14.545 { 00:09:14.545 "name": "BaseBdev1", 00:09:14.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.545 "is_configured": false, 00:09:14.545 "data_offset": 0, 00:09:14.545 "data_size": 0 00:09:14.545 }, 00:09:14.545 { 00:09:14.545 "name": null, 00:09:14.545 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:14.545 "is_configured": false, 00:09:14.545 "data_offset": 2048, 00:09:14.545 "data_size": 63488 00:09:14.545 }, 00:09:14.545 { 00:09:14.545 "name": "BaseBdev3", 00:09:14.545 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:14.545 "is_configured": true, 00:09:14.545 "data_offset": 2048, 00:09:14.545 "data_size": 63488 00:09:14.545 } 00:09:14.545 ] 00:09:14.545 }' 00:09:14.545 10:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:14.545 10:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.803 10:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:14.803 10:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.061 10:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:15.061 10:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:15.319 [2024-06-10 10:13:20.712652] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.319 BaseBdev1 00:09:15.319 10:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:15.319 10:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:09:15.319 10:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:15.319 10:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:09:15.319 10:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:15.319 10:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:15.319 10:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:15.577 10:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.837 [ 00:09:15.837 { 00:09:15.837 "name": "BaseBdev1", 00:09:15.837 "aliases": [ 00:09:15.837 "103d5a1e-2712-11ef-b084-113036b5c18d" 00:09:15.837 ], 00:09:15.837 "product_name": "Malloc disk", 00:09:15.837 "block_size": 512, 00:09:15.837 "num_blocks": 65536, 00:09:15.837 "uuid": "103d5a1e-2712-11ef-b084-113036b5c18d", 00:09:15.837 "assigned_rate_limits": { 00:09:15.837 "rw_ios_per_sec": 0, 00:09:15.837 "rw_mbytes_per_sec": 0, 00:09:15.837 "r_mbytes_per_sec": 0, 00:09:15.837 "w_mbytes_per_sec": 0 00:09:15.837 }, 00:09:15.837 "claimed": true, 00:09:15.837 "claim_type": "exclusive_write", 00:09:15.837 "zoned": false, 00:09:15.837 "supported_io_types": { 00:09:15.837 "read": true, 00:09:15.837 "write": true, 00:09:15.837 "unmap": true, 00:09:15.837 "write_zeroes": true, 00:09:15.837 "flush": true, 00:09:15.837 "reset": true, 00:09:15.837 "compare": false, 00:09:15.837 "compare_and_write": false, 00:09:15.837 "abort": true, 00:09:15.837 "nvme_admin": false, 00:09:15.837 "nvme_io": false 00:09:15.837 }, 00:09:15.837 "memory_domains": [ 00:09:15.837 { 00:09:15.837 "dma_device_id": "system", 00:09:15.837 "dma_device_type": 1 00:09:15.837 }, 00:09:15.837 { 00:09:15.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.837 "dma_device_type": 2 00:09:15.837 } 00:09:15.837 ], 00:09:15.837 "driver_specific": {} 00:09:15.837 } 00:09:15.837 ] 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:15.837 10:13:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.837 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.096 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:16.096 "name": "Existed_Raid", 00:09:16.096 "uuid": "0f1c0667-2712-11ef-b084-113036b5c18d", 00:09:16.096 "strip_size_kb": 64, 00:09:16.096 "state": "configuring", 00:09:16.096 "raid_level": "raid0", 00:09:16.096 "superblock": true, 00:09:16.096 "num_base_bdevs": 3, 00:09:16.096 "num_base_bdevs_discovered": 2, 00:09:16.096 "num_base_bdevs_operational": 3, 00:09:16.096 "base_bdevs_list": [ 00:09:16.096 { 00:09:16.096 "name": "BaseBdev1", 00:09:16.096 "uuid": "103d5a1e-2712-11ef-b084-113036b5c18d", 00:09:16.096 "is_configured": true, 00:09:16.096 "data_offset": 2048, 00:09:16.096 "data_size": 63488 00:09:16.096 }, 00:09:16.096 { 00:09:16.096 "name": null, 00:09:16.096 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:16.096 "is_configured": false, 00:09:16.096 "data_offset": 2048, 00:09:16.096 "data_size": 63488 00:09:16.096 }, 00:09:16.096 { 00:09:16.096 "name": "BaseBdev3", 00:09:16.096 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:16.096 "is_configured": true, 00:09:16.096 "data_offset": 2048, 00:09:16.096 "data_size": 63488 00:09:16.096 } 00:09:16.096 ] 00:09:16.096 }' 00:09:16.096 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:16.096 10:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.355 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.355 10:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:16.612 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:16.612 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:16.871 [2024-06-10 10:13:22.428591] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
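Editor's note: the pattern the trace above keeps repeating — create a malloc base bdev over the dedicated RPC socket, then wait until it is registered — reduces to three RPC calls. A minimal hand-run sketch, assuming a bdev_svc application is already listening on /var/tmp/spdk-raid.sock and reusing the exact names and sizes from the log:

rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Create a 32 MB malloc bdev with 512-byte blocks, as the test does for each base bdev.
$rpc -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
# Let pending examine callbacks finish, then confirm the bdev is visible
# (the -t 2000 argument makes bdev_get_bdevs wait up to 2000 ms for it to appear).
$rpc -s "$sock" bdev_wait_for_examine
$rpc -s "$sock" bdev_get_bdevs -b BaseBdev1 -t 2000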
00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.871 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.130 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:17.130 "name": "Existed_Raid", 00:09:17.130 "uuid": "0f1c0667-2712-11ef-b084-113036b5c18d", 00:09:17.130 "strip_size_kb": 64, 00:09:17.130 "state": "configuring", 00:09:17.130 "raid_level": "raid0", 00:09:17.130 "superblock": true, 00:09:17.130 "num_base_bdevs": 3, 00:09:17.130 "num_base_bdevs_discovered": 1, 00:09:17.130 "num_base_bdevs_operational": 3, 00:09:17.130 "base_bdevs_list": [ 00:09:17.130 { 00:09:17.130 "name": "BaseBdev1", 00:09:17.130 "uuid": "103d5a1e-2712-11ef-b084-113036b5c18d", 00:09:17.130 "is_configured": true, 00:09:17.130 "data_offset": 2048, 00:09:17.130 "data_size": 63488 00:09:17.130 }, 00:09:17.130 { 00:09:17.130 "name": null, 00:09:17.130 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:17.130 "is_configured": false, 00:09:17.130 "data_offset": 2048, 00:09:17.130 "data_size": 63488 00:09:17.130 }, 00:09:17.130 { 00:09:17.130 "name": null, 00:09:17.130 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:17.130 "is_configured": false, 00:09:17.130 "data_offset": 2048, 00:09:17.130 "data_size": 63488 00:09:17.130 } 00:09:17.130 ] 00:09:17.130 }' 00:09:17.130 10:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:17.130 10:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.697 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.697 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:17.955 [2024-06-10 10:13:23.524633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 
-- # local strip_size=64 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.955 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.522 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:18.522 "name": "Existed_Raid", 00:09:18.522 "uuid": "0f1c0667-2712-11ef-b084-113036b5c18d", 00:09:18.522 "strip_size_kb": 64, 00:09:18.522 "state": "configuring", 00:09:18.522 "raid_level": "raid0", 00:09:18.522 "superblock": true, 00:09:18.522 "num_base_bdevs": 3, 00:09:18.522 "num_base_bdevs_discovered": 2, 00:09:18.522 "num_base_bdevs_operational": 3, 00:09:18.522 "base_bdevs_list": [ 00:09:18.522 { 00:09:18.522 "name": "BaseBdev1", 00:09:18.522 "uuid": "103d5a1e-2712-11ef-b084-113036b5c18d", 00:09:18.522 "is_configured": true, 00:09:18.522 "data_offset": 2048, 00:09:18.522 "data_size": 63488 00:09:18.522 }, 00:09:18.522 { 00:09:18.522 "name": null, 00:09:18.522 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:18.522 "is_configured": false, 00:09:18.522 "data_offset": 2048, 00:09:18.522 "data_size": 63488 00:09:18.522 }, 00:09:18.522 { 00:09:18.522 "name": "BaseBdev3", 00:09:18.522 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:18.522 "is_configured": true, 00:09:18.522 "data_offset": 2048, 00:09:18.522 "data_size": 63488 00:09:18.522 } 00:09:18.522 ] 00:09:18.522 }' 00:09:18.522 10:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:18.522 10:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.781 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.781 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:19.040 [2024-06-10 10:13:24.616658] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.040 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.299 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:19.299 "name": "Existed_Raid", 00:09:19.299 "uuid": "0f1c0667-2712-11ef-b084-113036b5c18d", 00:09:19.299 "strip_size_kb": 64, 00:09:19.299 "state": "configuring", 00:09:19.299 "raid_level": "raid0", 00:09:19.299 "superblock": true, 00:09:19.299 "num_base_bdevs": 3, 00:09:19.299 "num_base_bdevs_discovered": 1, 00:09:19.299 "num_base_bdevs_operational": 3, 00:09:19.299 "base_bdevs_list": [ 00:09:19.299 { 00:09:19.299 "name": null, 00:09:19.299 "uuid": "103d5a1e-2712-11ef-b084-113036b5c18d", 00:09:19.299 "is_configured": false, 00:09:19.299 "data_offset": 2048, 00:09:19.299 "data_size": 63488 00:09:19.299 }, 00:09:19.299 { 00:09:19.299 "name": null, 00:09:19.299 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:19.299 "is_configured": false, 00:09:19.300 "data_offset": 2048, 00:09:19.300 "data_size": 63488 00:09:19.300 }, 00:09:19.300 { 00:09:19.300 "name": "BaseBdev3", 00:09:19.300 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:19.300 "is_configured": true, 00:09:19.300 "data_offset": 2048, 00:09:19.300 "data_size": 63488 00:09:19.300 } 00:09:19.300 ] 00:09:19.300 }' 00:09:19.300 10:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:19.300 10:13:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.865 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.865 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:19.865 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:19.865 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:20.124 [2024-06-10 10:13:25.721427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:20.383 10:13:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.383 10:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.642 10:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:20.642 "name": "Existed_Raid", 00:09:20.642 "uuid": "0f1c0667-2712-11ef-b084-113036b5c18d", 00:09:20.642 "strip_size_kb": 64, 00:09:20.642 "state": "configuring", 00:09:20.642 "raid_level": "raid0", 00:09:20.642 "superblock": true, 00:09:20.642 "num_base_bdevs": 3, 00:09:20.642 "num_base_bdevs_discovered": 2, 00:09:20.642 "num_base_bdevs_operational": 3, 00:09:20.642 "base_bdevs_list": [ 00:09:20.642 { 00:09:20.642 "name": null, 00:09:20.642 "uuid": "103d5a1e-2712-11ef-b084-113036b5c18d", 00:09:20.642 "is_configured": false, 00:09:20.642 "data_offset": 2048, 00:09:20.642 "data_size": 63488 00:09:20.642 }, 00:09:20.642 { 00:09:20.642 "name": "BaseBdev2", 00:09:20.642 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:20.642 "is_configured": true, 00:09:20.642 "data_offset": 2048, 00:09:20.642 "data_size": 63488 00:09:20.642 }, 00:09:20.642 { 00:09:20.642 "name": "BaseBdev3", 00:09:20.642 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:20.642 "is_configured": true, 00:09:20.642 "data_offset": 2048, 00:09:20.642 "data_size": 63488 00:09:20.642 } 00:09:20.642 ] 00:09:20.642 }' 00:09:20.642 10:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:20.642 10:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.901 10:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.901 10:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.159 10:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:21.159 10:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.159 10:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:21.417 10:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 103d5a1e-2712-11ef-b084-113036b5c18d 00:09:21.676 [2024-06-10 10:13:27.125583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:21.676 [2024-06-10 10:13:27.125628] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bedfa00 00:09:21.676 [2024-06-10 10:13:27.125633] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.676 [2024-06-10 10:13:27.125651] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bf42e20 00:09:21.676 [2024-06-10 10:13:27.125685] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bedfa00 00:09:21.676 [2024-06-10 10:13:27.125688] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82bedfa00 00:09:21.676 [2024-06-10 10:13:27.125704] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.676 NewBaseBdev 00:09:21.676 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:21.676 10:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:09:21.676 10:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:21.676 10:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:09:21.676 10:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:21.676 10:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:21.676 10:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:21.935 10:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:22.193 [ 00:09:22.193 { 00:09:22.193 "name": "NewBaseBdev", 00:09:22.193 "aliases": [ 00:09:22.193 "103d5a1e-2712-11ef-b084-113036b5c18d" 00:09:22.193 ], 00:09:22.193 "product_name": "Malloc disk", 00:09:22.193 "block_size": 512, 00:09:22.193 "num_blocks": 65536, 00:09:22.193 "uuid": "103d5a1e-2712-11ef-b084-113036b5c18d", 00:09:22.193 "assigned_rate_limits": { 00:09:22.193 "rw_ios_per_sec": 0, 00:09:22.193 "rw_mbytes_per_sec": 0, 00:09:22.193 "r_mbytes_per_sec": 0, 00:09:22.193 "w_mbytes_per_sec": 0 00:09:22.193 }, 00:09:22.193 "claimed": true, 00:09:22.193 "claim_type": "exclusive_write", 00:09:22.193 "zoned": false, 00:09:22.193 "supported_io_types": { 00:09:22.193 "read": true, 00:09:22.193 "write": true, 00:09:22.193 "unmap": true, 00:09:22.193 "write_zeroes": true, 00:09:22.193 "flush": true, 00:09:22.193 "reset": true, 00:09:22.193 "compare": false, 00:09:22.193 "compare_and_write": false, 00:09:22.193 "abort": true, 00:09:22.193 "nvme_admin": false, 00:09:22.193 "nvme_io": false 00:09:22.193 }, 00:09:22.193 "memory_domains": [ 00:09:22.193 { 00:09:22.193 "dma_device_id": "system", 00:09:22.193 "dma_device_type": 1 00:09:22.193 }, 00:09:22.193 { 00:09:22.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.193 "dma_device_type": 2 00:09:22.193 } 00:09:22.193 ], 00:09:22.193 "driver_specific": {} 00:09:22.193 } 00:09:22.193 ] 00:09:22.193 10:13:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # return 0 00:09:22.193 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.194 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.452 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:22.452 "name": "Existed_Raid", 00:09:22.452 "uuid": "0f1c0667-2712-11ef-b084-113036b5c18d", 00:09:22.452 "strip_size_kb": 64, 00:09:22.452 "state": "online", 00:09:22.452 "raid_level": "raid0", 00:09:22.452 "superblock": true, 00:09:22.452 "num_base_bdevs": 3, 00:09:22.452 "num_base_bdevs_discovered": 3, 00:09:22.452 "num_base_bdevs_operational": 3, 00:09:22.452 "base_bdevs_list": [ 00:09:22.452 { 00:09:22.452 "name": "NewBaseBdev", 00:09:22.452 "uuid": "103d5a1e-2712-11ef-b084-113036b5c18d", 00:09:22.452 "is_configured": true, 00:09:22.452 "data_offset": 2048, 00:09:22.452 "data_size": 63488 00:09:22.452 }, 00:09:22.452 { 00:09:22.452 "name": "BaseBdev2", 00:09:22.452 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:22.452 "is_configured": true, 00:09:22.452 "data_offset": 2048, 00:09:22.452 "data_size": 63488 00:09:22.452 }, 00:09:22.452 { 00:09:22.452 "name": "BaseBdev3", 00:09:22.452 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:22.452 "is_configured": true, 00:09:22.452 "data_offset": 2048, 00:09:22.452 "data_size": 63488 00:09:22.452 } 00:09:22.452 ] 00:09:22.452 }' 00:09:22.452 10:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:22.452 10:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.710 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:22.710 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:22.710 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:22.710 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:22.710 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:09:22.710 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:22.710 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:22.710 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:22.968 [2024-06-10 10:13:28.437532] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.968 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:22.968 "name": "Existed_Raid", 00:09:22.968 "aliases": [ 00:09:22.968 "0f1c0667-2712-11ef-b084-113036b5c18d" 00:09:22.968 ], 00:09:22.968 "product_name": "Raid Volume", 00:09:22.968 "block_size": 512, 00:09:22.968 "num_blocks": 190464, 00:09:22.968 "uuid": "0f1c0667-2712-11ef-b084-113036b5c18d", 00:09:22.968 "assigned_rate_limits": { 00:09:22.968 "rw_ios_per_sec": 0, 00:09:22.968 "rw_mbytes_per_sec": 0, 00:09:22.968 "r_mbytes_per_sec": 0, 00:09:22.968 "w_mbytes_per_sec": 0 00:09:22.968 }, 00:09:22.968 "claimed": false, 00:09:22.968 "zoned": false, 00:09:22.968 "supported_io_types": { 00:09:22.968 "read": true, 00:09:22.968 "write": true, 00:09:22.968 "unmap": true, 00:09:22.968 "write_zeroes": true, 00:09:22.968 "flush": true, 00:09:22.968 "reset": true, 00:09:22.968 "compare": false, 00:09:22.968 "compare_and_write": false, 00:09:22.968 "abort": false, 00:09:22.968 "nvme_admin": false, 00:09:22.968 "nvme_io": false 00:09:22.968 }, 00:09:22.968 "memory_domains": [ 00:09:22.968 { 00:09:22.968 "dma_device_id": "system", 00:09:22.968 "dma_device_type": 1 00:09:22.968 }, 00:09:22.968 { 00:09:22.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.968 "dma_device_type": 2 00:09:22.968 }, 00:09:22.968 { 00:09:22.968 "dma_device_id": "system", 00:09:22.968 "dma_device_type": 1 00:09:22.968 }, 00:09:22.968 { 00:09:22.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.968 "dma_device_type": 2 00:09:22.968 }, 00:09:22.968 { 00:09:22.968 "dma_device_id": "system", 00:09:22.968 "dma_device_type": 1 00:09:22.968 }, 00:09:22.968 { 00:09:22.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.968 "dma_device_type": 2 00:09:22.968 } 00:09:22.968 ], 00:09:22.968 "driver_specific": { 00:09:22.968 "raid": { 00:09:22.968 "uuid": "0f1c0667-2712-11ef-b084-113036b5c18d", 00:09:22.968 "strip_size_kb": 64, 00:09:22.968 "state": "online", 00:09:22.968 "raid_level": "raid0", 00:09:22.968 "superblock": true, 00:09:22.968 "num_base_bdevs": 3, 00:09:22.968 "num_base_bdevs_discovered": 3, 00:09:22.968 "num_base_bdevs_operational": 3, 00:09:22.968 "base_bdevs_list": [ 00:09:22.968 { 00:09:22.968 "name": "NewBaseBdev", 00:09:22.968 "uuid": "103d5a1e-2712-11ef-b084-113036b5c18d", 00:09:22.968 "is_configured": true, 00:09:22.968 "data_offset": 2048, 00:09:22.968 "data_size": 63488 00:09:22.968 }, 00:09:22.968 { 00:09:22.968 "name": "BaseBdev2", 00:09:22.968 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:22.968 "is_configured": true, 00:09:22.968 "data_offset": 2048, 00:09:22.968 "data_size": 63488 00:09:22.968 }, 00:09:22.968 { 00:09:22.968 "name": "BaseBdev3", 00:09:22.968 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:22.968 "is_configured": true, 00:09:22.968 "data_offset": 2048, 00:09:22.968 "data_size": 63488 00:09:22.968 } 00:09:22.968 ] 00:09:22.968 } 00:09:22.968 } 00:09:22.968 }' 00:09:22.968 10:13:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.968 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:22.968 BaseBdev2 00:09:22.968 BaseBdev3' 00:09:22.968 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:22.968 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:22.968 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:23.227 "name": "NewBaseBdev", 00:09:23.227 "aliases": [ 00:09:23.227 "103d5a1e-2712-11ef-b084-113036b5c18d" 00:09:23.227 ], 00:09:23.227 "product_name": "Malloc disk", 00:09:23.227 "block_size": 512, 00:09:23.227 "num_blocks": 65536, 00:09:23.227 "uuid": "103d5a1e-2712-11ef-b084-113036b5c18d", 00:09:23.227 "assigned_rate_limits": { 00:09:23.227 "rw_ios_per_sec": 0, 00:09:23.227 "rw_mbytes_per_sec": 0, 00:09:23.227 "r_mbytes_per_sec": 0, 00:09:23.227 "w_mbytes_per_sec": 0 00:09:23.227 }, 00:09:23.227 "claimed": true, 00:09:23.227 "claim_type": "exclusive_write", 00:09:23.227 "zoned": false, 00:09:23.227 "supported_io_types": { 00:09:23.227 "read": true, 00:09:23.227 "write": true, 00:09:23.227 "unmap": true, 00:09:23.227 "write_zeroes": true, 00:09:23.227 "flush": true, 00:09:23.227 "reset": true, 00:09:23.227 "compare": false, 00:09:23.227 "compare_and_write": false, 00:09:23.227 "abort": true, 00:09:23.227 "nvme_admin": false, 00:09:23.227 "nvme_io": false 00:09:23.227 }, 00:09:23.227 "memory_domains": [ 00:09:23.227 { 00:09:23.227 "dma_device_id": "system", 00:09:23.227 "dma_device_type": 1 00:09:23.227 }, 00:09:23.227 { 00:09:23.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.227 "dma_device_type": 2 00:09:23.227 } 00:09:23.227 ], 00:09:23.227 "driver_specific": {} 00:09:23.227 }' 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:23.227 10:13:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:23.227 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:23.485 "name": "BaseBdev2", 00:09:23.485 "aliases": [ 00:09:23.485 "0e550ad6-2712-11ef-b084-113036b5c18d" 00:09:23.485 ], 00:09:23.485 "product_name": "Malloc disk", 00:09:23.485 "block_size": 512, 00:09:23.485 "num_blocks": 65536, 00:09:23.485 "uuid": "0e550ad6-2712-11ef-b084-113036b5c18d", 00:09:23.485 "assigned_rate_limits": { 00:09:23.485 "rw_ios_per_sec": 0, 00:09:23.485 "rw_mbytes_per_sec": 0, 00:09:23.485 "r_mbytes_per_sec": 0, 00:09:23.485 "w_mbytes_per_sec": 0 00:09:23.485 }, 00:09:23.485 "claimed": true, 00:09:23.485 "claim_type": "exclusive_write", 00:09:23.485 "zoned": false, 00:09:23.485 "supported_io_types": { 00:09:23.485 "read": true, 00:09:23.485 "write": true, 00:09:23.485 "unmap": true, 00:09:23.485 "write_zeroes": true, 00:09:23.485 "flush": true, 00:09:23.485 "reset": true, 00:09:23.485 "compare": false, 00:09:23.485 "compare_and_write": false, 00:09:23.485 "abort": true, 00:09:23.485 "nvme_admin": false, 00:09:23.485 "nvme_io": false 00:09:23.485 }, 00:09:23.485 "memory_domains": [ 00:09:23.485 { 00:09:23.485 "dma_device_id": "system", 00:09:23.485 "dma_device_type": 1 00:09:23.485 }, 00:09:23.485 { 00:09:23.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.485 "dma_device_type": 2 00:09:23.485 } 00:09:23.485 ], 00:09:23.485 "driver_specific": {} 00:09:23.485 }' 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:23.485 10:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:23.742 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:23.742 "name": "BaseBdev3", 00:09:23.742 "aliases": [ 
00:09:23.742 "0ebccdc0-2712-11ef-b084-113036b5c18d" 00:09:23.742 ], 00:09:23.742 "product_name": "Malloc disk", 00:09:23.742 "block_size": 512, 00:09:23.742 "num_blocks": 65536, 00:09:23.742 "uuid": "0ebccdc0-2712-11ef-b084-113036b5c18d", 00:09:23.742 "assigned_rate_limits": { 00:09:23.742 "rw_ios_per_sec": 0, 00:09:23.742 "rw_mbytes_per_sec": 0, 00:09:23.742 "r_mbytes_per_sec": 0, 00:09:23.742 "w_mbytes_per_sec": 0 00:09:23.742 }, 00:09:23.742 "claimed": true, 00:09:23.743 "claim_type": "exclusive_write", 00:09:23.743 "zoned": false, 00:09:23.743 "supported_io_types": { 00:09:23.743 "read": true, 00:09:23.743 "write": true, 00:09:23.743 "unmap": true, 00:09:23.743 "write_zeroes": true, 00:09:23.743 "flush": true, 00:09:23.743 "reset": true, 00:09:23.743 "compare": false, 00:09:23.743 "compare_and_write": false, 00:09:23.743 "abort": true, 00:09:23.743 "nvme_admin": false, 00:09:23.743 "nvme_io": false 00:09:23.743 }, 00:09:23.743 "memory_domains": [ 00:09:23.743 { 00:09:23.743 "dma_device_id": "system", 00:09:23.743 "dma_device_type": 1 00:09:23.743 }, 00:09:23.743 { 00:09:23.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.743 "dma_device_type": 2 00:09:23.743 } 00:09:23.743 ], 00:09:23.743 "driver_specific": {} 00:09:23.743 }' 00:09:23.743 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:23.743 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:23.743 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:23.743 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:23.743 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:24.001 [2024-06-10 10:13:29.569565] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.001 [2024-06-10 10:13:29.569595] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.001 [2024-06-10 10:13:29.569615] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.001 [2024-06-10 10:13:29.569639] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.001 [2024-06-10 10:13:29.569644] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bedfa00 name Existed_Raid, state offline 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 53447 00:09:24.001 10:13:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 53447 ']' 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 53447 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps -c -o command 53447 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # tail -1 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:09:24.001 killing process with pid 53447 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 53447' 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 53447 00:09:24.001 [2024-06-10 10:13:29.598248] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.001 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 53447 00:09:24.259 [2024-06-10 10:13:29.612532] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.259 10:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:24.259 00:09:24.259 real 0m23.643s 00:09:24.259 user 0m42.732s 00:09:24.259 sys 0m3.776s 00:09:24.259 ************************************ 00:09:24.259 END TEST raid_state_function_test_sb 00:09:24.259 ************************************ 00:09:24.259 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:24.259 10:13:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.259 10:13:29 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:24.259 10:13:29 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:09:24.259 10:13:29 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:24.259 10:13:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.259 ************************************ 00:09:24.259 START TEST raid_superblock_test 00:09:24.259 ************************************ 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid0 3 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 
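Editor's note: the verify_raid_bdev_state helper exercised throughout the test above amounts to one RPC plus a jq filter over its JSON output. A minimal sketch of that check, using only calls and field names that appear in the dumps above (the expected values are illustrative):

rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Pull the descriptor of the raid bdev under test out of the full listing.
info=$($rpc -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# Assert on the fields the test cares about: state, level and discovered base bdev count.
[[ $(jq -r .state <<< "$info") == configuring ]]
[[ $(jq -r .raid_level <<< "$info") == raid0 ]]
[[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 2 ]]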
00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=54171 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 54171 /var/tmp/spdk-raid.sock 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 54171 ']' 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:24.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:24.259 10:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.259 [2024-06-10 10:13:29.839229] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:09:24.259 [2024-06-10 10:13:29.839399] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:24.825 EAL: TSC is not safe to use in SMP mode 00:09:24.825 EAL: TSC is not invariant 00:09:24.825 [2024-06-10 10:13:30.325952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.825 [2024-06-10 10:13:30.411404] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
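The lines that follow build the array one RPC at a time: three malloc bdevs are created, each is wrapped in a passthru bdev, and the three passthru bdevs are assembled into a raid0 volume with a superblock. Condensed into a standalone sketch (socket path, sizes, names, and UUIDs copied from the log; the loop form is an assumption made for brevity):

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev
        $RPC bdev_malloc_create 32 512 -b malloc$i
        $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # raid0 over the three passthru bdevs, 64 KiB strip size, superblock enabled (-s)
    $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
    # expect state "online" with all three base bdevs discovered
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'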
00:09:24.825 [2024-06-10 10:13:30.413838] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.825 [2024-06-10 10:13:30.414723] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.825 [2024-06-10 10:13:30.414741] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:25.390 10:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:25.648 malloc1 00:09:25.648 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:25.906 [2024-06-10 10:13:31.389356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:25.906 [2024-06-10 10:13:31.389418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.906 [2024-06-10 10:13:31.389431] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d5a0780 00:09:25.906 [2024-06-10 10:13:31.389439] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.906 [2024-06-10 10:13:31.390214] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.906 [2024-06-10 10:13:31.390244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:25.906 pt1 00:09:25.906 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:25.906 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:25.906 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:09:25.906 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:09:25.906 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:25.906 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:25.907 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:25.907 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:25.907 10:13:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:26.165 malloc2 00:09:26.165 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.424 [2024-06-10 10:13:31.969389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.424 [2024-06-10 10:13:31.969453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.424 [2024-06-10 10:13:31.969465] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d5a0c80 00:09:26.424 [2024-06-10 10:13:31.969473] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.424 [2024-06-10 10:13:31.970005] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.424 [2024-06-10 10:13:31.970032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:26.424 pt2 00:09:26.424 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:26.424 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:26.424 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:09:26.424 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:09:26.424 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:26.424 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:26.424 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:26.424 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:26.424 10:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:26.697 malloc3 00:09:26.956 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:26.956 [2024-06-10 10:13:32.513384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:26.956 [2024-06-10 10:13:32.513445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.956 [2024-06-10 10:13:32.513456] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d5a1180 00:09:26.956 [2024-06-10 10:13:32.513464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.956 [2024-06-10 10:13:32.513936] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.956 [2024-06-10 10:13:32.513968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:26.956 pt3 00:09:26.956 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:26.956 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:26.956 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:09:27.524 [2024-06-10 10:13:32.821409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:27.524 [2024-06-10 10:13:32.821869] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:27.524 [2024-06-10 10:13:32.821885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:27.524 [2024-06-10 10:13:32.821930] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d5a1400 00:09:27.524 [2024-06-10 10:13:32.821935] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:27.524 [2024-06-10 10:13:32.821966] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d603e20 00:09:27.524 [2024-06-10 10:13:32.822024] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d5a1400 00:09:27.524 [2024-06-10 10:13:32.822027] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d5a1400 00:09:27.524 [2024-06-10 10:13:32.822050] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.524 10:13:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.524 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:27.524 "name": "raid_bdev1", 00:09:27.524 "uuid": "177503cd-2712-11ef-b084-113036b5c18d", 00:09:27.524 "strip_size_kb": 64, 00:09:27.524 "state": "online", 00:09:27.524 "raid_level": "raid0", 00:09:27.524 "superblock": true, 00:09:27.524 "num_base_bdevs": 3, 00:09:27.524 "num_base_bdevs_discovered": 3, 00:09:27.524 "num_base_bdevs_operational": 3, 00:09:27.524 "base_bdevs_list": [ 00:09:27.524 { 00:09:27.524 "name": "pt1", 00:09:27.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:27.524 "is_configured": true, 00:09:27.524 "data_offset": 2048, 00:09:27.524 "data_size": 63488 00:09:27.524 }, 00:09:27.524 { 00:09:27.524 "name": "pt2", 00:09:27.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:27.524 "is_configured": true, 00:09:27.524 
"data_offset": 2048, 00:09:27.524 "data_size": 63488 00:09:27.524 }, 00:09:27.524 { 00:09:27.524 "name": "pt3", 00:09:27.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:27.524 "is_configured": true, 00:09:27.524 "data_offset": 2048, 00:09:27.524 "data_size": 63488 00:09:27.524 } 00:09:27.524 ] 00:09:27.524 }' 00:09:27.524 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:27.524 10:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.090 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:09:28.090 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:28.090 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:28.090 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:28.090 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:28.090 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:28.090 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:28.090 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:28.349 [2024-06-10 10:13:33.721449] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:28.349 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:28.349 "name": "raid_bdev1", 00:09:28.349 "aliases": [ 00:09:28.349 "177503cd-2712-11ef-b084-113036b5c18d" 00:09:28.349 ], 00:09:28.349 "product_name": "Raid Volume", 00:09:28.349 "block_size": 512, 00:09:28.349 "num_blocks": 190464, 00:09:28.349 "uuid": "177503cd-2712-11ef-b084-113036b5c18d", 00:09:28.349 "assigned_rate_limits": { 00:09:28.349 "rw_ios_per_sec": 0, 00:09:28.349 "rw_mbytes_per_sec": 0, 00:09:28.349 "r_mbytes_per_sec": 0, 00:09:28.349 "w_mbytes_per_sec": 0 00:09:28.349 }, 00:09:28.349 "claimed": false, 00:09:28.349 "zoned": false, 00:09:28.349 "supported_io_types": { 00:09:28.349 "read": true, 00:09:28.349 "write": true, 00:09:28.349 "unmap": true, 00:09:28.349 "write_zeroes": true, 00:09:28.349 "flush": true, 00:09:28.349 "reset": true, 00:09:28.349 "compare": false, 00:09:28.349 "compare_and_write": false, 00:09:28.349 "abort": false, 00:09:28.349 "nvme_admin": false, 00:09:28.349 "nvme_io": false 00:09:28.349 }, 00:09:28.349 "memory_domains": [ 00:09:28.349 { 00:09:28.349 "dma_device_id": "system", 00:09:28.349 "dma_device_type": 1 00:09:28.349 }, 00:09:28.349 { 00:09:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.349 "dma_device_type": 2 00:09:28.349 }, 00:09:28.349 { 00:09:28.349 "dma_device_id": "system", 00:09:28.349 "dma_device_type": 1 00:09:28.349 }, 00:09:28.349 { 00:09:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.349 "dma_device_type": 2 00:09:28.349 }, 00:09:28.349 { 00:09:28.349 "dma_device_id": "system", 00:09:28.349 "dma_device_type": 1 00:09:28.349 }, 00:09:28.349 { 00:09:28.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.349 "dma_device_type": 2 00:09:28.349 } 00:09:28.349 ], 00:09:28.349 "driver_specific": { 00:09:28.349 "raid": { 00:09:28.349 "uuid": "177503cd-2712-11ef-b084-113036b5c18d", 00:09:28.349 "strip_size_kb": 64, 00:09:28.349 "state": "online", 00:09:28.349 "raid_level": "raid0", 
00:09:28.349 "superblock": true, 00:09:28.349 "num_base_bdevs": 3, 00:09:28.349 "num_base_bdevs_discovered": 3, 00:09:28.349 "num_base_bdevs_operational": 3, 00:09:28.349 "base_bdevs_list": [ 00:09:28.349 { 00:09:28.349 "name": "pt1", 00:09:28.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.349 "is_configured": true, 00:09:28.349 "data_offset": 2048, 00:09:28.349 "data_size": 63488 00:09:28.349 }, 00:09:28.349 { 00:09:28.349 "name": "pt2", 00:09:28.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.349 "is_configured": true, 00:09:28.349 "data_offset": 2048, 00:09:28.349 "data_size": 63488 00:09:28.349 }, 00:09:28.349 { 00:09:28.349 "name": "pt3", 00:09:28.349 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.349 "is_configured": true, 00:09:28.349 "data_offset": 2048, 00:09:28.349 "data_size": 63488 00:09:28.349 } 00:09:28.349 ] 00:09:28.349 } 00:09:28.349 } 00:09:28.349 }' 00:09:28.350 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:28.350 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:28.350 pt2 00:09:28.350 pt3' 00:09:28.350 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:28.350 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:28.350 10:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:28.608 "name": "pt1", 00:09:28.608 "aliases": [ 00:09:28.608 "00000000-0000-0000-0000-000000000001" 00:09:28.608 ], 00:09:28.608 "product_name": "passthru", 00:09:28.608 "block_size": 512, 00:09:28.608 "num_blocks": 65536, 00:09:28.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.608 "assigned_rate_limits": { 00:09:28.608 "rw_ios_per_sec": 0, 00:09:28.608 "rw_mbytes_per_sec": 0, 00:09:28.608 "r_mbytes_per_sec": 0, 00:09:28.608 "w_mbytes_per_sec": 0 00:09:28.608 }, 00:09:28.608 "claimed": true, 00:09:28.608 "claim_type": "exclusive_write", 00:09:28.608 "zoned": false, 00:09:28.608 "supported_io_types": { 00:09:28.608 "read": true, 00:09:28.608 "write": true, 00:09:28.608 "unmap": true, 00:09:28.608 "write_zeroes": true, 00:09:28.608 "flush": true, 00:09:28.608 "reset": true, 00:09:28.608 "compare": false, 00:09:28.608 "compare_and_write": false, 00:09:28.608 "abort": true, 00:09:28.608 "nvme_admin": false, 00:09:28.608 "nvme_io": false 00:09:28.608 }, 00:09:28.608 "memory_domains": [ 00:09:28.608 { 00:09:28.608 "dma_device_id": "system", 00:09:28.608 "dma_device_type": 1 00:09:28.608 }, 00:09:28.608 { 00:09:28.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.608 "dma_device_type": 2 00:09:28.608 } 00:09:28.608 ], 00:09:28.608 "driver_specific": { 00:09:28.608 "passthru": { 00:09:28.608 "name": "pt1", 00:09:28.608 "base_bdev_name": "malloc1" 00:09:28.608 } 00:09:28.608 } 00:09:28.608 }' 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:28.608 
10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:28.608 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:28.866 "name": "pt2", 00:09:28.866 "aliases": [ 00:09:28.866 "00000000-0000-0000-0000-000000000002" 00:09:28.866 ], 00:09:28.866 "product_name": "passthru", 00:09:28.866 "block_size": 512, 00:09:28.866 "num_blocks": 65536, 00:09:28.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.866 "assigned_rate_limits": { 00:09:28.866 "rw_ios_per_sec": 0, 00:09:28.866 "rw_mbytes_per_sec": 0, 00:09:28.866 "r_mbytes_per_sec": 0, 00:09:28.866 "w_mbytes_per_sec": 0 00:09:28.866 }, 00:09:28.866 "claimed": true, 00:09:28.866 "claim_type": "exclusive_write", 00:09:28.866 "zoned": false, 00:09:28.866 "supported_io_types": { 00:09:28.866 "read": true, 00:09:28.866 "write": true, 00:09:28.866 "unmap": true, 00:09:28.866 "write_zeroes": true, 00:09:28.866 "flush": true, 00:09:28.866 "reset": true, 00:09:28.866 "compare": false, 00:09:28.866 "compare_and_write": false, 00:09:28.866 "abort": true, 00:09:28.866 "nvme_admin": false, 00:09:28.866 "nvme_io": false 00:09:28.866 }, 00:09:28.866 "memory_domains": [ 00:09:28.866 { 00:09:28.866 "dma_device_id": "system", 00:09:28.866 "dma_device_type": 1 00:09:28.866 }, 00:09:28.866 { 00:09:28.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.866 "dma_device_type": 2 00:09:28.866 } 00:09:28.866 ], 00:09:28.866 "driver_specific": { 00:09:28.866 "passthru": { 00:09:28.866 "name": "pt2", 00:09:28.866 "base_bdev_name": "malloc2" 00:09:28.866 } 00:09:28.866 } 00:09:28.866 }' 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:28.866 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:29.433 "name": "pt3", 00:09:29.433 "aliases": [ 00:09:29.433 "00000000-0000-0000-0000-000000000003" 00:09:29.433 ], 00:09:29.433 "product_name": "passthru", 00:09:29.433 "block_size": 512, 00:09:29.433 "num_blocks": 65536, 00:09:29.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.433 "assigned_rate_limits": { 00:09:29.433 "rw_ios_per_sec": 0, 00:09:29.433 "rw_mbytes_per_sec": 0, 00:09:29.433 "r_mbytes_per_sec": 0, 00:09:29.433 "w_mbytes_per_sec": 0 00:09:29.433 }, 00:09:29.433 "claimed": true, 00:09:29.433 "claim_type": "exclusive_write", 00:09:29.433 "zoned": false, 00:09:29.433 "supported_io_types": { 00:09:29.433 "read": true, 00:09:29.433 "write": true, 00:09:29.433 "unmap": true, 00:09:29.433 "write_zeroes": true, 00:09:29.433 "flush": true, 00:09:29.433 "reset": true, 00:09:29.433 "compare": false, 00:09:29.433 "compare_and_write": false, 00:09:29.433 "abort": true, 00:09:29.433 "nvme_admin": false, 00:09:29.433 "nvme_io": false 00:09:29.433 }, 00:09:29.433 "memory_domains": [ 00:09:29.433 { 00:09:29.433 "dma_device_id": "system", 00:09:29.433 "dma_device_type": 1 00:09:29.433 }, 00:09:29.433 { 00:09:29.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.433 "dma_device_type": 2 00:09:29.433 } 00:09:29.433 ], 00:09:29.433 "driver_specific": { 00:09:29.433 "passthru": { 00:09:29.433 "name": "pt3", 00:09:29.433 "base_bdev_name": "malloc3" 00:09:29.433 } 00:09:29.433 } 00:09:29.433 }' 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:29.433 10:13:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:09:29.691 [2024-06-10 10:13:35.081500] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.691 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=177503cd-2712-11ef-b084-113036b5c18d 00:09:29.691 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 177503cd-2712-11ef-b084-113036b5c18d ']' 00:09:29.691 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:29.949 [2024-06-10 10:13:35.381457] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.949 [2024-06-10 10:13:35.381482] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.949 [2024-06-10 10:13:35.381502] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.949 [2024-06-10 10:13:35.381515] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.950 [2024-06-10 10:13:35.381519] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d5a1400 name raid_bdev1, state offline 00:09:29.950 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:09:29.950 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.207 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:09:30.207 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:09:30.207 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:30.207 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:30.502 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:30.502 10:13:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:30.761 10:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:30.761 10:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:31.019 10:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:31.019 10:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- 
# local es=0 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:31.277 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:31.535 [2024-06-10 10:13:36.889521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:31.535 [2024-06-10 10:13:36.889995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:31.535 [2024-06-10 10:13:36.890008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:31.535 [2024-06-10 10:13:36.890021] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:31.535 [2024-06-10 10:13:36.890057] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:31.535 [2024-06-10 10:13:36.890067] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:31.535 [2024-06-10 10:13:36.890075] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.535 [2024-06-10 10:13:36.890079] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d5a1180 name raid_bdev1, state configuring 00:09:31.535 request: 00:09:31.535 { 00:09:31.535 "name": "raid_bdev1", 00:09:31.535 "raid_level": "raid0", 00:09:31.535 "base_bdevs": [ 00:09:31.535 "malloc1", 00:09:31.535 "malloc2", 00:09:31.535 "malloc3" 00:09:31.535 ], 00:09:31.535 "superblock": false, 00:09:31.535 "strip_size_kb": 64, 00:09:31.535 "method": "bdev_raid_create", 00:09:31.535 "req_id": 1 00:09:31.535 } 00:09:31.535 Got JSON-RPC error response 00:09:31.535 response: 00:09:31.535 { 00:09:31.535 "code": -17, 00:09:31.535 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:31.535 } 00:09:31.535 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:09:31.535 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:09:31.535 10:13:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:09:31.535 10:13:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:09:31.535 10:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.535 10:13:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:09:31.794 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:09:31.794 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:09:31.794 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:32.053 [2024-06-10 10:13:37.429530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:32.053 [2024-06-10 10:13:37.429586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.053 [2024-06-10 10:13:37.429599] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d5a0c80 00:09:32.053 [2024-06-10 10:13:37.429607] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.053 [2024-06-10 10:13:37.430117] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.053 [2024-06-10 10:13:37.430145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:32.053 [2024-06-10 10:13:37.430167] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:32.053 [2024-06-10 10:13:37.430178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:32.053 pt1 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.053 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.311 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:32.311 "name": "raid_bdev1", 00:09:32.311 "uuid": "177503cd-2712-11ef-b084-113036b5c18d", 00:09:32.311 "strip_size_kb": 64, 00:09:32.311 "state": 
"configuring", 00:09:32.311 "raid_level": "raid0", 00:09:32.311 "superblock": true, 00:09:32.311 "num_base_bdevs": 3, 00:09:32.311 "num_base_bdevs_discovered": 1, 00:09:32.311 "num_base_bdevs_operational": 3, 00:09:32.311 "base_bdevs_list": [ 00:09:32.311 { 00:09:32.311 "name": "pt1", 00:09:32.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.311 "is_configured": true, 00:09:32.311 "data_offset": 2048, 00:09:32.311 "data_size": 63488 00:09:32.311 }, 00:09:32.311 { 00:09:32.311 "name": null, 00:09:32.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.311 "is_configured": false, 00:09:32.311 "data_offset": 2048, 00:09:32.311 "data_size": 63488 00:09:32.311 }, 00:09:32.311 { 00:09:32.311 "name": null, 00:09:32.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.311 "is_configured": false, 00:09:32.311 "data_offset": 2048, 00:09:32.311 "data_size": 63488 00:09:32.311 } 00:09:32.311 ] 00:09:32.311 }' 00:09:32.311 10:13:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:32.311 10:13:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.569 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:09:32.569 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:32.827 [2024-06-10 10:13:38.353557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:32.827 [2024-06-10 10:13:38.353624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.827 [2024-06-10 10:13:38.353635] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d5a1680 00:09:32.828 [2024-06-10 10:13:38.353642] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.828 [2024-06-10 10:13:38.353734] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.828 [2024-06-10 10:13:38.353743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:32.828 [2024-06-10 10:13:38.353761] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:32.828 [2024-06-10 10:13:38.353768] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:32.828 pt2 00:09:32.828 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:33.086 [2024-06-10 10:13:38.629548] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:33.086 
10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:33.086 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.367 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:33.367 "name": "raid_bdev1", 00:09:33.367 "uuid": "177503cd-2712-11ef-b084-113036b5c18d", 00:09:33.367 "strip_size_kb": 64, 00:09:33.367 "state": "configuring", 00:09:33.367 "raid_level": "raid0", 00:09:33.367 "superblock": true, 00:09:33.367 "num_base_bdevs": 3, 00:09:33.367 "num_base_bdevs_discovered": 1, 00:09:33.367 "num_base_bdevs_operational": 3, 00:09:33.367 "base_bdevs_list": [ 00:09:33.367 { 00:09:33.367 "name": "pt1", 00:09:33.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.367 "is_configured": true, 00:09:33.367 "data_offset": 2048, 00:09:33.367 "data_size": 63488 00:09:33.367 }, 00:09:33.367 { 00:09:33.367 "name": null, 00:09:33.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.367 "is_configured": false, 00:09:33.367 "data_offset": 2048, 00:09:33.367 "data_size": 63488 00:09:33.367 }, 00:09:33.367 { 00:09:33.367 "name": null, 00:09:33.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.367 "is_configured": false, 00:09:33.367 "data_offset": 2048, 00:09:33.367 "data_size": 63488 00:09:33.367 } 00:09:33.367 ] 00:09:33.367 }' 00:09:33.367 10:13:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:33.367 10:13:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.953 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:09:33.953 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:33.953 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.953 [2024-06-10 10:13:39.465590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.953 [2024-06-10 10:13:39.465646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.953 [2024-06-10 10:13:39.465658] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d5a1680 00:09:33.953 [2024-06-10 10:13:39.465665] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.953 [2024-06-10 10:13:39.465758] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.953 [2024-06-10 10:13:39.465767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.953 [2024-06-10 10:13:39.465786] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:33.953 [2024-06-10 10:13:39.465793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.953 pt2 00:09:33.953 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:33.953 10:13:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:33.953 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:34.212 [2024-06-10 10:13:39.673620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:34.212 [2024-06-10 10:13:39.673658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.212 [2024-06-10 10:13:39.673667] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d5a1400 00:09:34.212 [2024-06-10 10:13:39.673674] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.212 [2024-06-10 10:13:39.673733] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.212 [2024-06-10 10:13:39.673741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:34.212 [2024-06-10 10:13:39.673755] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:34.212 [2024-06-10 10:13:39.673770] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:34.212 [2024-06-10 10:13:39.673789] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d5a0780 00:09:34.212 [2024-06-10 10:13:39.673792] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.212 [2024-06-10 10:13:39.673811] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d603e20 00:09:34.212 [2024-06-10 10:13:39.673851] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d5a0780 00:09:34.212 [2024-06-10 10:13:39.673854] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d5a0780 00:09:34.212 [2024-06-10 10:13:39.673870] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.212 pt3 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:09:34.212 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.470 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:34.470 "name": "raid_bdev1", 00:09:34.470 "uuid": "177503cd-2712-11ef-b084-113036b5c18d", 00:09:34.470 "strip_size_kb": 64, 00:09:34.470 "state": "online", 00:09:34.470 "raid_level": "raid0", 00:09:34.470 "superblock": true, 00:09:34.470 "num_base_bdevs": 3, 00:09:34.470 "num_base_bdevs_discovered": 3, 00:09:34.470 "num_base_bdevs_operational": 3, 00:09:34.470 "base_bdevs_list": [ 00:09:34.470 { 00:09:34.470 "name": "pt1", 00:09:34.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.470 "is_configured": true, 00:09:34.470 "data_offset": 2048, 00:09:34.470 "data_size": 63488 00:09:34.470 }, 00:09:34.470 { 00:09:34.470 "name": "pt2", 00:09:34.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.470 "is_configured": true, 00:09:34.470 "data_offset": 2048, 00:09:34.470 "data_size": 63488 00:09:34.470 }, 00:09:34.470 { 00:09:34.470 "name": "pt3", 00:09:34.470 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.470 "is_configured": true, 00:09:34.470 "data_offset": 2048, 00:09:34.470 "data_size": 63488 00:09:34.470 } 00:09:34.470 ] 00:09:34.470 }' 00:09:34.470 10:13:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:34.470 10:13:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.038 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:09:35.038 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:35.038 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:35.038 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:35.038 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:35.038 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:35.038 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:35.038 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:35.038 [2024-06-10 10:13:40.629748] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.298 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:35.298 "name": "raid_bdev1", 00:09:35.298 "aliases": [ 00:09:35.298 "177503cd-2712-11ef-b084-113036b5c18d" 00:09:35.298 ], 00:09:35.298 "product_name": "Raid Volume", 00:09:35.298 "block_size": 512, 00:09:35.298 "num_blocks": 190464, 00:09:35.298 "uuid": "177503cd-2712-11ef-b084-113036b5c18d", 00:09:35.298 "assigned_rate_limits": { 00:09:35.298 "rw_ios_per_sec": 0, 00:09:35.298 "rw_mbytes_per_sec": 0, 00:09:35.298 "r_mbytes_per_sec": 0, 00:09:35.298 "w_mbytes_per_sec": 0 00:09:35.298 }, 00:09:35.298 "claimed": false, 00:09:35.298 "zoned": false, 00:09:35.298 "supported_io_types": { 00:09:35.298 "read": true, 00:09:35.298 "write": true, 00:09:35.298 "unmap": true, 00:09:35.298 "write_zeroes": true, 00:09:35.298 "flush": true, 00:09:35.298 "reset": true, 00:09:35.298 "compare": false, 00:09:35.298 "compare_and_write": false, 00:09:35.298 "abort": false, 00:09:35.298 "nvme_admin": 
false, 00:09:35.298 "nvme_io": false 00:09:35.298 }, 00:09:35.298 "memory_domains": [ 00:09:35.298 { 00:09:35.298 "dma_device_id": "system", 00:09:35.298 "dma_device_type": 1 00:09:35.298 }, 00:09:35.298 { 00:09:35.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.298 "dma_device_type": 2 00:09:35.298 }, 00:09:35.298 { 00:09:35.298 "dma_device_id": "system", 00:09:35.298 "dma_device_type": 1 00:09:35.298 }, 00:09:35.298 { 00:09:35.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.298 "dma_device_type": 2 00:09:35.298 }, 00:09:35.298 { 00:09:35.298 "dma_device_id": "system", 00:09:35.298 "dma_device_type": 1 00:09:35.298 }, 00:09:35.298 { 00:09:35.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.298 "dma_device_type": 2 00:09:35.298 } 00:09:35.298 ], 00:09:35.298 "driver_specific": { 00:09:35.298 "raid": { 00:09:35.298 "uuid": "177503cd-2712-11ef-b084-113036b5c18d", 00:09:35.298 "strip_size_kb": 64, 00:09:35.298 "state": "online", 00:09:35.298 "raid_level": "raid0", 00:09:35.298 "superblock": true, 00:09:35.298 "num_base_bdevs": 3, 00:09:35.298 "num_base_bdevs_discovered": 3, 00:09:35.298 "num_base_bdevs_operational": 3, 00:09:35.298 "base_bdevs_list": [ 00:09:35.298 { 00:09:35.298 "name": "pt1", 00:09:35.298 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.298 "is_configured": true, 00:09:35.298 "data_offset": 2048, 00:09:35.298 "data_size": 63488 00:09:35.298 }, 00:09:35.298 { 00:09:35.298 "name": "pt2", 00:09:35.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.298 "is_configured": true, 00:09:35.298 "data_offset": 2048, 00:09:35.298 "data_size": 63488 00:09:35.298 }, 00:09:35.298 { 00:09:35.298 "name": "pt3", 00:09:35.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.298 "is_configured": true, 00:09:35.298 "data_offset": 2048, 00:09:35.298 "data_size": 63488 00:09:35.298 } 00:09:35.298 ] 00:09:35.298 } 00:09:35.298 } 00:09:35.298 }' 00:09:35.298 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.298 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:35.298 pt2 00:09:35.298 pt3' 00:09:35.298 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:35.298 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:35.298 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:35.557 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:35.557 "name": "pt1", 00:09:35.557 "aliases": [ 00:09:35.557 "00000000-0000-0000-0000-000000000001" 00:09:35.557 ], 00:09:35.557 "product_name": "passthru", 00:09:35.557 "block_size": 512, 00:09:35.557 "num_blocks": 65536, 00:09:35.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.557 "assigned_rate_limits": { 00:09:35.557 "rw_ios_per_sec": 0, 00:09:35.557 "rw_mbytes_per_sec": 0, 00:09:35.557 "r_mbytes_per_sec": 0, 00:09:35.557 "w_mbytes_per_sec": 0 00:09:35.557 }, 00:09:35.557 "claimed": true, 00:09:35.557 "claim_type": "exclusive_write", 00:09:35.557 "zoned": false, 00:09:35.557 "supported_io_types": { 00:09:35.557 "read": true, 00:09:35.557 "write": true, 00:09:35.557 "unmap": true, 00:09:35.558 "write_zeroes": true, 00:09:35.558 "flush": true, 00:09:35.558 "reset": true, 00:09:35.558 "compare": false, 00:09:35.558 
"compare_and_write": false, 00:09:35.558 "abort": true, 00:09:35.558 "nvme_admin": false, 00:09:35.558 "nvme_io": false 00:09:35.558 }, 00:09:35.558 "memory_domains": [ 00:09:35.558 { 00:09:35.558 "dma_device_id": "system", 00:09:35.558 "dma_device_type": 1 00:09:35.558 }, 00:09:35.558 { 00:09:35.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.558 "dma_device_type": 2 00:09:35.558 } 00:09:35.558 ], 00:09:35.558 "driver_specific": { 00:09:35.558 "passthru": { 00:09:35.558 "name": "pt1", 00:09:35.558 "base_bdev_name": "malloc1" 00:09:35.558 } 00:09:35.558 } 00:09:35.558 }' 00:09:35.558 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.558 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.558 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:35.558 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.558 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.558 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:35.558 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.558 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.558 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:35.558 10:13:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.558 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.558 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:35.558 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:35.558 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:35.558 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:35.816 "name": "pt2", 00:09:35.816 "aliases": [ 00:09:35.816 "00000000-0000-0000-0000-000000000002" 00:09:35.816 ], 00:09:35.816 "product_name": "passthru", 00:09:35.816 "block_size": 512, 00:09:35.816 "num_blocks": 65536, 00:09:35.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.816 "assigned_rate_limits": { 00:09:35.816 "rw_ios_per_sec": 0, 00:09:35.816 "rw_mbytes_per_sec": 0, 00:09:35.816 "r_mbytes_per_sec": 0, 00:09:35.816 "w_mbytes_per_sec": 0 00:09:35.816 }, 00:09:35.816 "claimed": true, 00:09:35.816 "claim_type": "exclusive_write", 00:09:35.816 "zoned": false, 00:09:35.816 "supported_io_types": { 00:09:35.816 "read": true, 00:09:35.816 "write": true, 00:09:35.816 "unmap": true, 00:09:35.816 "write_zeroes": true, 00:09:35.816 "flush": true, 00:09:35.816 "reset": true, 00:09:35.816 "compare": false, 00:09:35.816 "compare_and_write": false, 00:09:35.816 "abort": true, 00:09:35.816 "nvme_admin": false, 00:09:35.816 "nvme_io": false 00:09:35.816 }, 00:09:35.816 "memory_domains": [ 00:09:35.816 { 00:09:35.816 "dma_device_id": "system", 00:09:35.816 "dma_device_type": 1 00:09:35.816 }, 00:09:35.816 { 00:09:35.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.816 "dma_device_type": 2 00:09:35.816 } 00:09:35.816 ], 00:09:35.816 "driver_specific": { 
00:09:35.816 "passthru": { 00:09:35.816 "name": "pt2", 00:09:35.816 "base_bdev_name": "malloc2" 00:09:35.816 } 00:09:35.816 } 00:09:35.816 }' 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:09:35.816 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:36.075 "name": "pt3", 00:09:36.075 "aliases": [ 00:09:36.075 "00000000-0000-0000-0000-000000000003" 00:09:36.075 ], 00:09:36.075 "product_name": "passthru", 00:09:36.075 "block_size": 512, 00:09:36.075 "num_blocks": 65536, 00:09:36.075 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:36.075 "assigned_rate_limits": { 00:09:36.075 "rw_ios_per_sec": 0, 00:09:36.075 "rw_mbytes_per_sec": 0, 00:09:36.075 "r_mbytes_per_sec": 0, 00:09:36.075 "w_mbytes_per_sec": 0 00:09:36.075 }, 00:09:36.075 "claimed": true, 00:09:36.075 "claim_type": "exclusive_write", 00:09:36.075 "zoned": false, 00:09:36.075 "supported_io_types": { 00:09:36.075 "read": true, 00:09:36.075 "write": true, 00:09:36.075 "unmap": true, 00:09:36.075 "write_zeroes": true, 00:09:36.075 "flush": true, 00:09:36.075 "reset": true, 00:09:36.075 "compare": false, 00:09:36.075 "compare_and_write": false, 00:09:36.075 "abort": true, 00:09:36.075 "nvme_admin": false, 00:09:36.075 "nvme_io": false 00:09:36.075 }, 00:09:36.075 "memory_domains": [ 00:09:36.075 { 00:09:36.075 "dma_device_id": "system", 00:09:36.075 "dma_device_type": 1 00:09:36.075 }, 00:09:36.075 { 00:09:36.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.075 "dma_device_type": 2 00:09:36.075 } 00:09:36.075 ], 00:09:36.075 "driver_specific": { 00:09:36.075 "passthru": { 00:09:36.075 "name": "pt3", 00:09:36.075 "base_bdev_name": "malloc3" 00:09:36.075 } 00:09:36.075 } 00:09:36.075 }' 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:36.075 
10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:09:36.075 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:36.334 [2024-06-10 10:13:41.829785] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 177503cd-2712-11ef-b084-113036b5c18d '!=' 177503cd-2712-11ef-b084-113036b5c18d ']' 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 54171 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 54171 ']' 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 54171 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps -c -o command 54171 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # tail -1 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:09:36.334 killing process with pid 54171 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 54171' 00:09:36.334 10:13:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 54171 00:09:36.334 [2024-06-10 10:13:41.860361] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.334 [2024-06-10 10:13:41.860390] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.334 [2024-06-10 10:13:41.860404] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.334 [2024-06-10 10:13:41.860408] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d5a0780 name raid_bdev1, state offline 00:09:36.334 10:13:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 54171 00:09:36.334 [2024-06-10 10:13:41.874976] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.598 10:13:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:09:36.598 00:09:36.598 real 0m12.214s 00:09:36.598 user 0m21.814s 00:09:36.598 sys 0m1.875s 00:09:36.598 10:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:36.598 10:13:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.598 ************************************ 00:09:36.598 END TEST raid_superblock_test 00:09:36.598 ************************************ 00:09:36.598 10:13:42 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:36.598 10:13:42 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:09:36.598 10:13:42 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:36.598 10:13:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.598 ************************************ 00:09:36.598 START TEST raid_read_error_test 00:09:36.598 ************************************ 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 3 read 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:36.598 10:13:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.WGJH4IdH 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=54526 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 54526 /var/tmp/spdk-raid.sock 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 54526 ']' 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:36.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:36.598 10:13:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.598 [2024-06-10 10:13:42.102296] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:09:36.598 [2024-06-10 10:13:42.102574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:37.168 EAL: TSC is not safe to use in SMP mode 00:09:37.168 EAL: TSC is not invariant 00:09:37.168 [2024-06-10 10:13:42.609037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.168 [2024-06-10 10:13:42.702698] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:37.168 [2024-06-10 10:13:42.705399] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.168 [2024-06-10 10:13:42.706284] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.168 [2024-06-10 10:13:42.706300] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.762 10:13:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:37.762 10:13:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:09:37.762 10:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:37.762 10:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:38.033 BaseBdev1_malloc 00:09:38.033 10:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:38.292 true 00:09:38.292 10:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:38.292 [2024-06-10 10:13:43.866297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:38.292 [2024-06-10 10:13:43.866367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.292 [2024-06-10 10:13:43.866394] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd8f780 00:09:38.292 [2024-06-10 10:13:43.866400] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.292 [2024-06-10 10:13:43.866924] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.292 [2024-06-10 10:13:43.866955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:38.292 BaseBdev1 00:09:38.292 10:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:38.292 10:13:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:38.551 BaseBdev2_malloc 00:09:38.809 10:13:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:39.067 true 00:09:39.067 10:13:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:39.326 [2024-06-10 10:13:44.686324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:39.326 [2024-06-10 10:13:44.686380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.326 [2024-06-10 10:13:44.686409] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd8fc80 00:09:39.326 [2024-06-10 10:13:44.686417] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.326 [2024-06-10 10:13:44.686974] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.326 [2024-06-10 10:13:44.687007] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:09:39.326 BaseBdev2 00:09:39.326 10:13:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:39.326 10:13:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:39.326 BaseBdev3_malloc 00:09:39.326 10:13:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:09:39.583 true 00:09:39.583 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:39.841 [2024-06-10 10:13:45.342329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:39.841 [2024-06-10 10:13:45.342383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.841 [2024-06-10 10:13:45.342409] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd90180 00:09:39.841 [2024-06-10 10:13:45.342416] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.841 [2024-06-10 10:13:45.342906] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.841 [2024-06-10 10:13:45.342935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:39.841 BaseBdev3 00:09:39.841 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:09:40.099 [2024-06-10 10:13:45.570362] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.099 [2024-06-10 10:13:45.570818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.099 [2024-06-10 10:13:45.570844] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.099 [2024-06-10 10:13:45.570893] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cd90400 00:09:40.099 [2024-06-10 10:13:45.570899] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:40.099 [2024-06-10 10:13:45.570933] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cdfbe20 00:09:40.099 [2024-06-10 10:13:45.570987] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cd90400 00:09:40.099 [2024-06-10 10:13:45.570991] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cd90400 00:09:40.099 [2024-06-10 10:13:45.571012] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.099 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:40.099 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:40.099 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:40.099 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:40.099 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:09:40.099 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:40.099 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:40.100 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:40.100 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:40.100 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:40.100 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.100 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.358 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.358 "name": "raid_bdev1", 00:09:40.358 "uuid": "1f0e59e7-2712-11ef-b084-113036b5c18d", 00:09:40.358 "strip_size_kb": 64, 00:09:40.358 "state": "online", 00:09:40.358 "raid_level": "raid0", 00:09:40.358 "superblock": true, 00:09:40.358 "num_base_bdevs": 3, 00:09:40.358 "num_base_bdevs_discovered": 3, 00:09:40.358 "num_base_bdevs_operational": 3, 00:09:40.358 "base_bdevs_list": [ 00:09:40.358 { 00:09:40.358 "name": "BaseBdev1", 00:09:40.358 "uuid": "7bbce40e-5780-bc5d-973e-9514cf7217cb", 00:09:40.358 "is_configured": true, 00:09:40.358 "data_offset": 2048, 00:09:40.358 "data_size": 63488 00:09:40.358 }, 00:09:40.358 { 00:09:40.358 "name": "BaseBdev2", 00:09:40.358 "uuid": "afb7a738-4913-265b-8660-dce5e9003cab", 00:09:40.358 "is_configured": true, 00:09:40.358 "data_offset": 2048, 00:09:40.358 "data_size": 63488 00:09:40.358 }, 00:09:40.358 { 00:09:40.358 "name": "BaseBdev3", 00:09:40.358 "uuid": "1cd76f62-1bd9-9e50-b4ec-32a6e82bc701", 00:09:40.358 "is_configured": true, 00:09:40.358 "data_offset": 2048, 00:09:40.358 "data_size": 63488 00:09:40.358 } 00:09:40.358 ] 00:09:40.358 }' 00:09:40.358 10:13:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.358 10:13:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.923 10:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:40.923 10:13:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:40.923 [2024-06-10 10:13:46.406438] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cdfbec0 00:09:41.857 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:42.114 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:42.114 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.115 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.372 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.372 "name": "raid_bdev1", 00:09:42.372 "uuid": "1f0e59e7-2712-11ef-b084-113036b5c18d", 00:09:42.372 "strip_size_kb": 64, 00:09:42.372 "state": "online", 00:09:42.372 "raid_level": "raid0", 00:09:42.372 "superblock": true, 00:09:42.372 "num_base_bdevs": 3, 00:09:42.372 "num_base_bdevs_discovered": 3, 00:09:42.372 "num_base_bdevs_operational": 3, 00:09:42.372 "base_bdevs_list": [ 00:09:42.372 { 00:09:42.372 "name": "BaseBdev1", 00:09:42.372 "uuid": "7bbce40e-5780-bc5d-973e-9514cf7217cb", 00:09:42.372 "is_configured": true, 00:09:42.372 "data_offset": 2048, 00:09:42.372 "data_size": 63488 00:09:42.372 }, 00:09:42.372 { 00:09:42.372 "name": "BaseBdev2", 00:09:42.372 "uuid": "afb7a738-4913-265b-8660-dce5e9003cab", 00:09:42.372 "is_configured": true, 00:09:42.372 "data_offset": 2048, 00:09:42.372 "data_size": 63488 00:09:42.372 }, 00:09:42.372 { 00:09:42.372 "name": "BaseBdev3", 00:09:42.372 "uuid": "1cd76f62-1bd9-9e50-b4ec-32a6e82bc701", 00:09:42.372 "is_configured": true, 00:09:42.372 "data_offset": 2048, 00:09:42.372 "data_size": 63488 00:09:42.372 } 00:09:42.372 ] 00:09:42.372 }' 00:09:42.372 10:13:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.372 10:13:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:42.936 [2024-06-10 10:13:48.488186] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.936 [2024-06-10 10:13:48.488220] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.936 [2024-06-10 10:13:48.488593] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.936 [2024-06-10 10:13:48.488604] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.936 [2024-06-10 10:13:48.488612] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.936 [2024-06-10 10:13:48.488617] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd90400 name raid_bdev1, state offline 00:09:42.936 0 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 54526 00:09:42.936 
10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 54526 ']' 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 54526 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 54526 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # tail -1 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:09:42.936 killing process with pid 54526 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 54526' 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 54526 00:09:42.936 [2024-06-10 10:13:48.515961] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.936 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 54526 00:09:42.936 [2024-06-10 10:13:48.530483] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.194 10:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.WGJH4IdH 00:09:43.194 10:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:43.194 10:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:43.194 10:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:09:43.194 10:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:09:43.194 10:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:43.194 10:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:43.194 10:13:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:09:43.194 00:09:43.194 real 0m6.625s 00:09:43.194 user 0m10.459s 00:09:43.194 sys 0m1.045s 00:09:43.194 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:43.195 10:13:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.195 ************************************ 00:09:43.195 END TEST raid_read_error_test 00:09:43.195 ************************************ 00:09:43.195 10:13:48 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:43.195 10:13:48 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:09:43.195 10:13:48 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:43.195 10:13:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.195 ************************************ 00:09:43.195 START TEST raid_write_error_test 00:09:43.195 ************************************ 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 3 write 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 
00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.gXgWrayN 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=54657 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 54657 /var/tmp/spdk-raid.sock 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 54657 ']' 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:43.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:43.195 10:13:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.195 [2024-06-10 10:13:48.766551] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:09:43.195 [2024-06-10 10:13:48.766721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:43.761 EAL: TSC is not safe to use in SMP mode 00:09:43.761 EAL: TSC is not invariant 00:09:43.761 [2024-06-10 10:13:49.221580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.761 [2024-06-10 10:13:49.302801] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:43.761 [2024-06-10 10:13:49.305009] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.761 [2024-06-10 10:13:49.305792] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.761 [2024-06-10 10:13:49.305808] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.328 10:13:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:44.328 10:13:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:09:44.328 10:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:44.328 10:13:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:44.893 BaseBdev1_malloc 00:09:44.893 10:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:44.893 true 00:09:44.893 10:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:45.151 [2024-06-10 10:13:50.716728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:45.151 [2024-06-10 10:13:50.716784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.151 [2024-06-10 10:13:50.716809] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2ca780 00:09:45.151 [2024-06-10 10:13:50.716817] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.151 [2024-06-10 10:13:50.717344] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.151 [2024-06-10 10:13:50.717376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:45.151 BaseBdev1 00:09:45.151 10:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:45.151 10:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:45.410 BaseBdev2_malloc 00:09:45.410 10:13:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:45.669 true 00:09:45.669 10:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:45.927 [2024-06-10 10:13:51.376744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:45.927 [2024-06-10 10:13:51.376797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.927 [2024-06-10 10:13:51.376821] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2cac80 00:09:45.927 [2024-06-10 10:13:51.376830] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.927 [2024-06-10 10:13:51.377364] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.927 [2024-06-10 10:13:51.377412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:45.927 BaseBdev2 00:09:45.928 10:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:45.928 10:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:46.185 BaseBdev3_malloc 00:09:46.185 10:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:09:46.443 true 00:09:46.443 10:13:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:46.716 [2024-06-10 10:13:52.140784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:46.716 [2024-06-10 10:13:52.140873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.716 [2024-06-10 10:13:52.140902] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a2cb180 00:09:46.716 [2024-06-10 10:13:52.140910] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.716 [2024-06-10 10:13:52.141485] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.716 [2024-06-10 10:13:52.141541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:46.716 BaseBdev3 00:09:46.716 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:09:46.980 [2024-06-10 10:13:52.436816] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.980 [2024-06-10 10:13:52.437297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.980 [2024-06-10 10:13:52.437322] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.980 [2024-06-10 10:13:52.437375] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a2cb400 00:09:46.980 [2024-06-10 10:13:52.437380] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:46.980 [2024-06-10 10:13:52.437417] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a336e20 00:09:46.980 [2024-06-10 10:13:52.437474] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a2cb400 00:09:46.980 [2024-06-10 10:13:52.437478] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a2cb400 00:09:46.980 [2024-06-10 10:13:52.437503] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.980 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.238 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:47.238 "name": "raid_bdev1", 00:09:47.238 "uuid": "2326169b-2712-11ef-b084-113036b5c18d", 00:09:47.238 "strip_size_kb": 64, 00:09:47.238 "state": "online", 00:09:47.238 "raid_level": "raid0", 00:09:47.238 "superblock": true, 00:09:47.238 "num_base_bdevs": 3, 00:09:47.238 "num_base_bdevs_discovered": 3, 00:09:47.238 "num_base_bdevs_operational": 3, 00:09:47.238 "base_bdevs_list": [ 00:09:47.238 { 00:09:47.238 "name": "BaseBdev1", 00:09:47.238 "uuid": "d639c353-8b7a-c359-a9d9-1fbf632bbd1e", 00:09:47.238 "is_configured": true, 00:09:47.238 "data_offset": 2048, 00:09:47.238 "data_size": 63488 00:09:47.238 }, 00:09:47.238 { 00:09:47.238 "name": "BaseBdev2", 00:09:47.238 "uuid": "a7cf1dc6-cb44-035f-a08a-808d95423aca", 00:09:47.238 "is_configured": true, 00:09:47.238 "data_offset": 2048, 00:09:47.238 "data_size": 63488 00:09:47.238 }, 00:09:47.238 { 00:09:47.238 "name": "BaseBdev3", 00:09:47.238 "uuid": "aff44341-a3ea-ac5e-a770-0d822f165c00", 00:09:47.238 "is_configured": true, 00:09:47.238 "data_offset": 2048, 00:09:47.238 "data_size": 63488 00:09:47.238 } 00:09:47.238 ] 00:09:47.238 }' 00:09:47.238 10:13:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:47.238 10:13:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.496 10:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:47.496 10:13:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:09:47.754 [2024-06-10 10:13:53.136895] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a336ec0 00:09:48.690 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.948 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.207 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:49.207 "name": "raid_bdev1", 00:09:49.207 "uuid": "2326169b-2712-11ef-b084-113036b5c18d", 00:09:49.207 "strip_size_kb": 64, 00:09:49.207 "state": "online", 00:09:49.207 "raid_level": "raid0", 00:09:49.207 "superblock": true, 00:09:49.207 "num_base_bdevs": 3, 00:09:49.207 "num_base_bdevs_discovered": 3, 00:09:49.207 "num_base_bdevs_operational": 3, 00:09:49.207 "base_bdevs_list": [ 00:09:49.207 { 00:09:49.207 "name": "BaseBdev1", 00:09:49.207 "uuid": "d639c353-8b7a-c359-a9d9-1fbf632bbd1e", 00:09:49.207 "is_configured": true, 00:09:49.207 "data_offset": 2048, 00:09:49.207 "data_size": 63488 00:09:49.207 }, 00:09:49.207 { 00:09:49.207 "name": "BaseBdev2", 00:09:49.207 "uuid": "a7cf1dc6-cb44-035f-a08a-808d95423aca", 00:09:49.207 "is_configured": true, 00:09:49.207 "data_offset": 2048, 00:09:49.207 "data_size": 63488 00:09:49.207 }, 00:09:49.207 { 00:09:49.207 "name": "BaseBdev3", 00:09:49.207 "uuid": "aff44341-a3ea-ac5e-a770-0d822f165c00", 00:09:49.207 "is_configured": true, 00:09:49.207 "data_offset": 2048, 00:09:49.207 "data_size": 63488 00:09:49.207 } 00:09:49.207 ] 00:09:49.207 }' 00:09:49.207 10:13:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:49.207 10:13:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.773 
10:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:49.773 [2024-06-10 10:13:55.354808] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.773 [2024-06-10 10:13:55.354844] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.773 [2024-06-10 10:13:55.355206] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.773 [2024-06-10 10:13:55.355217] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.773 [2024-06-10 10:13:55.355225] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.773 [2024-06-10 10:13:55.355230] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a2cb400 name raid_bdev1, state offline 00:09:49.773 0 00:09:49.773 10:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 54657 00:09:49.773 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 54657 ']' 00:09:49.773 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 54657 00:09:49.773 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 54657 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # tail -1 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 54657' 00:09:50.032 killing process with pid 54657 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 54657 00:09:50.032 [2024-06-10 10:13:55.383024] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 54657 00:09:50.032 [2024-06-10 10:13:55.397597] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.gXgWrayN 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:09:50.032 00:09:50.032 real 0m6.828s 00:09:50.032 user 0m10.826s 00:09:50.032 sys 0m1.123s 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:09:50.032 10:13:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.032 ************************************ 00:09:50.032 END TEST raid_write_error_test 00:09:50.032 ************************************ 00:09:50.032 10:13:55 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:50.032 10:13:55 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:09:50.032 10:13:55 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:09:50.032 10:13:55 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:09:50.032 10:13:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.032 ************************************ 00:09:50.032 START TEST raid_state_function_test 00:09:50.032 ************************************ 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 3 false 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:50.032 10:13:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=54790 00:09:50.032 Process raid pid: 54790 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54790' 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 54790 /var/tmp/spdk-raid.sock 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 54790 ']' 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:09:50.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:09:50.032 10:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.032 [2024-06-10 10:13:55.632079] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:09:50.032 [2024-06-10 10:13:55.632317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:50.597 EAL: TSC is not safe to use in SMP mode 00:09:50.597 EAL: TSC is not invariant 00:09:50.597 [2024-06-10 10:13:56.136887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.853 [2024-06-10 10:13:56.226414] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:50.853 [2024-06-10 10:13:56.229132] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.853 [2024-06-10 10:13:56.229922] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.853 [2024-06-10 10:13:56.229936] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.418 10:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:09:51.418 10:13:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:09:51.418 10:13:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:51.676 [2024-06-10 10:13:57.053194] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.676 [2024-06-10 10:13:57.053267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.676 [2024-06-10 10:13:57.053272] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.676 [2024-06-10 10:13:57.053281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.676 [2024-06-10 10:13:57.053284] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.676 [2024-06-10 10:13:57.053292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.676 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.933 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:51.933 "name": "Existed_Raid", 00:09:51.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.933 "strip_size_kb": 64, 00:09:51.933 "state": "configuring", 00:09:51.933 "raid_level": "concat", 00:09:51.933 "superblock": false, 00:09:51.933 "num_base_bdevs": 3, 00:09:51.933 "num_base_bdevs_discovered": 0, 00:09:51.933 "num_base_bdevs_operational": 3, 00:09:51.933 
"base_bdevs_list": [ 00:09:51.933 { 00:09:51.933 "name": "BaseBdev1", 00:09:51.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.933 "is_configured": false, 00:09:51.933 "data_offset": 0, 00:09:51.933 "data_size": 0 00:09:51.933 }, 00:09:51.933 { 00:09:51.933 "name": "BaseBdev2", 00:09:51.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.933 "is_configured": false, 00:09:51.933 "data_offset": 0, 00:09:51.933 "data_size": 0 00:09:51.933 }, 00:09:51.933 { 00:09:51.933 "name": "BaseBdev3", 00:09:51.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.933 "is_configured": false, 00:09:51.933 "data_offset": 0, 00:09:51.934 "data_size": 0 00:09:51.934 } 00:09:51.934 ] 00:09:51.934 }' 00:09:51.934 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.934 10:13:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.192 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:52.450 [2024-06-10 10:13:57.949187] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.450 [2024-06-10 10:13:57.949222] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abba500 name Existed_Raid, state configuring 00:09:52.450 10:13:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:52.708 [2024-06-10 10:13:58.209187] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.708 [2024-06-10 10:13:58.209239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.708 [2024-06-10 10:13:58.209244] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.708 [2024-06-10 10:13:58.209252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.708 [2024-06-10 10:13:58.209256] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:52.708 [2024-06-10 10:13:58.209263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.708 10:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:52.965 [2024-06-10 10:13:58.454164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.965 BaseBdev1 00:09:52.965 10:13:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:52.965 10:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:09:52.965 10:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:52.965 10:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:09:52.965 10:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:52.965 10:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:52.965 10:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:53.223 10:13:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.482 [ 00:09:53.482 { 00:09:53.482 "name": "BaseBdev1", 00:09:53.482 "aliases": [ 00:09:53.482 "26bc1e15-2712-11ef-b084-113036b5c18d" 00:09:53.482 ], 00:09:53.482 "product_name": "Malloc disk", 00:09:53.482 "block_size": 512, 00:09:53.482 "num_blocks": 65536, 00:09:53.482 "uuid": "26bc1e15-2712-11ef-b084-113036b5c18d", 00:09:53.482 "assigned_rate_limits": { 00:09:53.482 "rw_ios_per_sec": 0, 00:09:53.482 "rw_mbytes_per_sec": 0, 00:09:53.482 "r_mbytes_per_sec": 0, 00:09:53.482 "w_mbytes_per_sec": 0 00:09:53.482 }, 00:09:53.482 "claimed": true, 00:09:53.482 "claim_type": "exclusive_write", 00:09:53.482 "zoned": false, 00:09:53.482 "supported_io_types": { 00:09:53.482 "read": true, 00:09:53.482 "write": true, 00:09:53.482 "unmap": true, 00:09:53.482 "write_zeroes": true, 00:09:53.482 "flush": true, 00:09:53.482 "reset": true, 00:09:53.482 "compare": false, 00:09:53.482 "compare_and_write": false, 00:09:53.482 "abort": true, 00:09:53.482 "nvme_admin": false, 00:09:53.482 "nvme_io": false 00:09:53.482 }, 00:09:53.482 "memory_domains": [ 00:09:53.482 { 00:09:53.482 "dma_device_id": "system", 00:09:53.482 "dma_device_type": 1 00:09:53.482 }, 00:09:53.482 { 00:09:53.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.482 "dma_device_type": 2 00:09:53.482 } 00:09:53.482 ], 00:09:53.482 "driver_specific": {} 00:09:53.482 } 00:09:53.482 ] 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.482 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.796 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:53.796 "name": "Existed_Raid", 00:09:53.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.796 "strip_size_kb": 64, 00:09:53.796 "state": "configuring", 00:09:53.796 "raid_level": 
"concat", 00:09:53.796 "superblock": false, 00:09:53.796 "num_base_bdevs": 3, 00:09:53.796 "num_base_bdevs_discovered": 1, 00:09:53.796 "num_base_bdevs_operational": 3, 00:09:53.796 "base_bdevs_list": [ 00:09:53.796 { 00:09:53.796 "name": "BaseBdev1", 00:09:53.796 "uuid": "26bc1e15-2712-11ef-b084-113036b5c18d", 00:09:53.796 "is_configured": true, 00:09:53.796 "data_offset": 0, 00:09:53.796 "data_size": 65536 00:09:53.796 }, 00:09:53.796 { 00:09:53.796 "name": "BaseBdev2", 00:09:53.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.796 "is_configured": false, 00:09:53.796 "data_offset": 0, 00:09:53.796 "data_size": 0 00:09:53.796 }, 00:09:53.796 { 00:09:53.796 "name": "BaseBdev3", 00:09:53.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.797 "is_configured": false, 00:09:53.797 "data_offset": 0, 00:09:53.797 "data_size": 0 00:09:53.797 } 00:09:53.797 ] 00:09:53.797 }' 00:09:53.797 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:53.797 10:13:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.377 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:54.635 [2024-06-10 10:13:59.981195] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.635 [2024-06-10 10:13:59.981229] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abba500 name Existed_Raid, state configuring 00:09:54.635 10:13:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:54.636 [2024-06-10 10:14:00.197202] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.636 [2024-06-10 10:14:00.197890] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.636 [2024-06-10 10:14:00.197933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.636 [2024-06-10 10:14:00.197938] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.636 [2024-06-10 10:14:00.197946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:54.636 10:14:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.636 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.894 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:54.894 "name": "Existed_Raid", 00:09:54.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.894 "strip_size_kb": 64, 00:09:54.894 "state": "configuring", 00:09:54.894 "raid_level": "concat", 00:09:54.894 "superblock": false, 00:09:54.894 "num_base_bdevs": 3, 00:09:54.894 "num_base_bdevs_discovered": 1, 00:09:54.894 "num_base_bdevs_operational": 3, 00:09:54.894 "base_bdevs_list": [ 00:09:54.894 { 00:09:54.894 "name": "BaseBdev1", 00:09:54.894 "uuid": "26bc1e15-2712-11ef-b084-113036b5c18d", 00:09:54.894 "is_configured": true, 00:09:54.894 "data_offset": 0, 00:09:54.894 "data_size": 65536 00:09:54.894 }, 00:09:54.894 { 00:09:54.894 "name": "BaseBdev2", 00:09:54.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.894 "is_configured": false, 00:09:54.894 "data_offset": 0, 00:09:54.894 "data_size": 0 00:09:54.894 }, 00:09:54.894 { 00:09:54.894 "name": "BaseBdev3", 00:09:54.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.894 "is_configured": false, 00:09:54.894 "data_offset": 0, 00:09:54.894 "data_size": 0 00:09:54.894 } 00:09:54.894 ] 00:09:54.894 }' 00:09:54.894 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:54.894 10:14:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.152 10:14:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:55.409 [2024-06-10 10:14:01.009326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.667 BaseBdev2 00:09:55.667 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:55.667 10:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:09:55.667 10:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:55.667 10:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:09:55.667 10:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:55.667 10:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:55.667 10:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:55.924 10:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.181 [ 00:09:56.181 { 00:09:56.181 "name": "BaseBdev2", 00:09:56.181 "aliases": [ 00:09:56.181 
"2842224d-2712-11ef-b084-113036b5c18d" 00:09:56.181 ], 00:09:56.181 "product_name": "Malloc disk", 00:09:56.181 "block_size": 512, 00:09:56.181 "num_blocks": 65536, 00:09:56.181 "uuid": "2842224d-2712-11ef-b084-113036b5c18d", 00:09:56.181 "assigned_rate_limits": { 00:09:56.181 "rw_ios_per_sec": 0, 00:09:56.181 "rw_mbytes_per_sec": 0, 00:09:56.181 "r_mbytes_per_sec": 0, 00:09:56.181 "w_mbytes_per_sec": 0 00:09:56.181 }, 00:09:56.181 "claimed": true, 00:09:56.181 "claim_type": "exclusive_write", 00:09:56.181 "zoned": false, 00:09:56.181 "supported_io_types": { 00:09:56.181 "read": true, 00:09:56.181 "write": true, 00:09:56.181 "unmap": true, 00:09:56.181 "write_zeroes": true, 00:09:56.181 "flush": true, 00:09:56.181 "reset": true, 00:09:56.181 "compare": false, 00:09:56.181 "compare_and_write": false, 00:09:56.181 "abort": true, 00:09:56.181 "nvme_admin": false, 00:09:56.181 "nvme_io": false 00:09:56.181 }, 00:09:56.181 "memory_domains": [ 00:09:56.181 { 00:09:56.181 "dma_device_id": "system", 00:09:56.181 "dma_device_type": 1 00:09:56.181 }, 00:09:56.181 { 00:09:56.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.181 "dma_device_type": 2 00:09:56.181 } 00:09:56.181 ], 00:09:56.181 "driver_specific": {} 00:09:56.181 } 00:09:56.181 ] 00:09:56.181 10:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.182 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.440 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:56.440 "name": "Existed_Raid", 00:09:56.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.440 "strip_size_kb": 64, 00:09:56.440 "state": "configuring", 00:09:56.440 "raid_level": "concat", 00:09:56.440 "superblock": false, 00:09:56.440 "num_base_bdevs": 3, 00:09:56.440 "num_base_bdevs_discovered": 2, 00:09:56.440 "num_base_bdevs_operational": 3, 00:09:56.440 "base_bdevs_list": [ 
00:09:56.440 { 00:09:56.440 "name": "BaseBdev1", 00:09:56.441 "uuid": "26bc1e15-2712-11ef-b084-113036b5c18d", 00:09:56.441 "is_configured": true, 00:09:56.441 "data_offset": 0, 00:09:56.441 "data_size": 65536 00:09:56.441 }, 00:09:56.441 { 00:09:56.441 "name": "BaseBdev2", 00:09:56.441 "uuid": "2842224d-2712-11ef-b084-113036b5c18d", 00:09:56.441 "is_configured": true, 00:09:56.441 "data_offset": 0, 00:09:56.441 "data_size": 65536 00:09:56.441 }, 00:09:56.441 { 00:09:56.441 "name": "BaseBdev3", 00:09:56.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.441 "is_configured": false, 00:09:56.441 "data_offset": 0, 00:09:56.441 "data_size": 0 00:09:56.441 } 00:09:56.441 ] 00:09:56.441 }' 00:09:56.441 10:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:56.441 10:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.699 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:56.957 [2024-06-10 10:14:02.477345] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.957 [2024-06-10 10:14:02.477373] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abbaa00 00:09:56.957 [2024-06-10 10:14:02.477378] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:56.957 [2024-06-10 10:14:02.477398] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac1dec0 00:09:56.957 [2024-06-10 10:14:02.477487] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abbaa00 00:09:56.957 [2024-06-10 10:14:02.477491] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82abbaa00 00:09:56.957 [2024-06-10 10:14:02.477520] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.957 BaseBdev3 00:09:56.957 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:56.957 10:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:09:56.957 10:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:09:56.957 10:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:09:56.957 10:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:09:56.957 10:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:09:56.957 10:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:57.214 10:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.471 [ 00:09:57.471 { 00:09:57.471 "name": "BaseBdev3", 00:09:57.471 "aliases": [ 00:09:57.471 "29222367-2712-11ef-b084-113036b5c18d" 00:09:57.471 ], 00:09:57.471 "product_name": "Malloc disk", 00:09:57.471 "block_size": 512, 00:09:57.471 "num_blocks": 65536, 00:09:57.471 "uuid": "29222367-2712-11ef-b084-113036b5c18d", 00:09:57.471 "assigned_rate_limits": { 00:09:57.471 "rw_ios_per_sec": 0, 00:09:57.471 "rw_mbytes_per_sec": 0, 00:09:57.471 
"r_mbytes_per_sec": 0, 00:09:57.471 "w_mbytes_per_sec": 0 00:09:57.471 }, 00:09:57.471 "claimed": true, 00:09:57.471 "claim_type": "exclusive_write", 00:09:57.471 "zoned": false, 00:09:57.471 "supported_io_types": { 00:09:57.471 "read": true, 00:09:57.471 "write": true, 00:09:57.471 "unmap": true, 00:09:57.471 "write_zeroes": true, 00:09:57.471 "flush": true, 00:09:57.471 "reset": true, 00:09:57.471 "compare": false, 00:09:57.471 "compare_and_write": false, 00:09:57.471 "abort": true, 00:09:57.471 "nvme_admin": false, 00:09:57.471 "nvme_io": false 00:09:57.471 }, 00:09:57.471 "memory_domains": [ 00:09:57.471 { 00:09:57.471 "dma_device_id": "system", 00:09:57.471 "dma_device_type": 1 00:09:57.471 }, 00:09:57.471 { 00:09:57.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.471 "dma_device_type": 2 00:09:57.471 } 00:09:57.471 ], 00:09:57.471 "driver_specific": {} 00:09:57.471 } 00:09:57.471 ] 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.471 10:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.901 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:58.901 "name": "Existed_Raid", 00:09:58.901 "uuid": "292228cc-2712-11ef-b084-113036b5c18d", 00:09:58.901 "strip_size_kb": 64, 00:09:58.901 "state": "online", 00:09:58.901 "raid_level": "concat", 00:09:58.901 "superblock": false, 00:09:58.901 "num_base_bdevs": 3, 00:09:58.901 "num_base_bdevs_discovered": 3, 00:09:58.901 "num_base_bdevs_operational": 3, 00:09:58.901 "base_bdevs_list": [ 00:09:58.901 { 00:09:58.901 "name": "BaseBdev1", 00:09:58.901 "uuid": "26bc1e15-2712-11ef-b084-113036b5c18d", 00:09:58.901 "is_configured": true, 00:09:58.901 "data_offset": 0, 00:09:58.901 "data_size": 65536 00:09:58.901 }, 00:09:58.901 { 00:09:58.901 "name": "BaseBdev2", 00:09:58.901 "uuid": "2842224d-2712-11ef-b084-113036b5c18d", 00:09:58.901 "is_configured": 
true, 00:09:58.901 "data_offset": 0, 00:09:58.901 "data_size": 65536 00:09:58.901 }, 00:09:58.901 { 00:09:58.901 "name": "BaseBdev3", 00:09:58.901 "uuid": "29222367-2712-11ef-b084-113036b5c18d", 00:09:58.901 "is_configured": true, 00:09:58.901 "data_offset": 0, 00:09:58.901 "data_size": 65536 00:09:58.901 } 00:09:58.901 ] 00:09:58.901 }' 00:09:58.901 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:58.901 10:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.901 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.901 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:58.901 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:58.901 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:58.901 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:58.901 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:58.901 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:58.902 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:58.902 [2024-06-10 10:14:03.721291] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.902 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:58.902 "name": "Existed_Raid", 00:09:58.902 "aliases": [ 00:09:58.902 "292228cc-2712-11ef-b084-113036b5c18d" 00:09:58.902 ], 00:09:58.902 "product_name": "Raid Volume", 00:09:58.902 "block_size": 512, 00:09:58.902 "num_blocks": 196608, 00:09:58.902 "uuid": "292228cc-2712-11ef-b084-113036b5c18d", 00:09:58.902 "assigned_rate_limits": { 00:09:58.902 "rw_ios_per_sec": 0, 00:09:58.902 "rw_mbytes_per_sec": 0, 00:09:58.902 "r_mbytes_per_sec": 0, 00:09:58.902 "w_mbytes_per_sec": 0 00:09:58.902 }, 00:09:58.902 "claimed": false, 00:09:58.902 "zoned": false, 00:09:58.902 "supported_io_types": { 00:09:58.902 "read": true, 00:09:58.902 "write": true, 00:09:58.902 "unmap": true, 00:09:58.902 "write_zeroes": true, 00:09:58.902 "flush": true, 00:09:58.902 "reset": true, 00:09:58.902 "compare": false, 00:09:58.902 "compare_and_write": false, 00:09:58.902 "abort": false, 00:09:58.902 "nvme_admin": false, 00:09:58.902 "nvme_io": false 00:09:58.902 }, 00:09:58.902 "memory_domains": [ 00:09:58.902 { 00:09:58.902 "dma_device_id": "system", 00:09:58.902 "dma_device_type": 1 00:09:58.902 }, 00:09:58.902 { 00:09:58.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.902 "dma_device_type": 2 00:09:58.902 }, 00:09:58.902 { 00:09:58.902 "dma_device_id": "system", 00:09:58.902 "dma_device_type": 1 00:09:58.902 }, 00:09:58.902 { 00:09:58.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.902 "dma_device_type": 2 00:09:58.902 }, 00:09:58.902 { 00:09:58.902 "dma_device_id": "system", 00:09:58.902 "dma_device_type": 1 00:09:58.902 }, 00:09:58.902 { 00:09:58.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.902 "dma_device_type": 2 00:09:58.902 } 00:09:58.902 ], 00:09:58.902 "driver_specific": { 00:09:58.902 "raid": { 00:09:58.902 "uuid": "292228cc-2712-11ef-b084-113036b5c18d", 00:09:58.902 "strip_size_kb": 
64, 00:09:58.902 "state": "online", 00:09:58.902 "raid_level": "concat", 00:09:58.902 "superblock": false, 00:09:58.902 "num_base_bdevs": 3, 00:09:58.902 "num_base_bdevs_discovered": 3, 00:09:58.902 "num_base_bdevs_operational": 3, 00:09:58.902 "base_bdevs_list": [ 00:09:58.902 { 00:09:58.902 "name": "BaseBdev1", 00:09:58.902 "uuid": "26bc1e15-2712-11ef-b084-113036b5c18d", 00:09:58.902 "is_configured": true, 00:09:58.902 "data_offset": 0, 00:09:58.902 "data_size": 65536 00:09:58.902 }, 00:09:58.902 { 00:09:58.902 "name": "BaseBdev2", 00:09:58.902 "uuid": "2842224d-2712-11ef-b084-113036b5c18d", 00:09:58.902 "is_configured": true, 00:09:58.902 "data_offset": 0, 00:09:58.902 "data_size": 65536 00:09:58.902 }, 00:09:58.902 { 00:09:58.902 "name": "BaseBdev3", 00:09:58.902 "uuid": "29222367-2712-11ef-b084-113036b5c18d", 00:09:58.902 "is_configured": true, 00:09:58.902 "data_offset": 0, 00:09:58.902 "data_size": 65536 00:09:58.902 } 00:09:58.902 ] 00:09:58.902 } 00:09:58.902 } 00:09:58.902 }' 00:09:58.902 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.902 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:58.902 BaseBdev2 00:09:58.902 BaseBdev3' 00:09:58.902 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:58.902 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:58.902 10:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:58.902 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:58.902 "name": "BaseBdev1", 00:09:58.902 "aliases": [ 00:09:58.902 "26bc1e15-2712-11ef-b084-113036b5c18d" 00:09:58.902 ], 00:09:58.902 "product_name": "Malloc disk", 00:09:58.902 "block_size": 512, 00:09:58.902 "num_blocks": 65536, 00:09:58.902 "uuid": "26bc1e15-2712-11ef-b084-113036b5c18d", 00:09:58.902 "assigned_rate_limits": { 00:09:58.902 "rw_ios_per_sec": 0, 00:09:58.902 "rw_mbytes_per_sec": 0, 00:09:58.902 "r_mbytes_per_sec": 0, 00:09:58.902 "w_mbytes_per_sec": 0 00:09:58.902 }, 00:09:58.902 "claimed": true, 00:09:58.902 "claim_type": "exclusive_write", 00:09:58.902 "zoned": false, 00:09:58.902 "supported_io_types": { 00:09:58.902 "read": true, 00:09:58.902 "write": true, 00:09:58.902 "unmap": true, 00:09:58.902 "write_zeroes": true, 00:09:58.902 "flush": true, 00:09:58.902 "reset": true, 00:09:58.902 "compare": false, 00:09:58.902 "compare_and_write": false, 00:09:58.902 "abort": true, 00:09:58.902 "nvme_admin": false, 00:09:58.902 "nvme_io": false 00:09:58.902 }, 00:09:58.902 "memory_domains": [ 00:09:58.902 { 00:09:58.902 "dma_device_id": "system", 00:09:58.902 "dma_device_type": 1 00:09:58.902 }, 00:09:58.902 { 00:09:58.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.903 "dma_device_type": 2 00:09:58.903 } 00:09:58.903 ], 00:09:58.903 "driver_specific": {} 00:09:58.903 }' 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:58.903 "name": "BaseBdev2", 00:09:58.903 "aliases": [ 00:09:58.903 "2842224d-2712-11ef-b084-113036b5c18d" 00:09:58.903 ], 00:09:58.903 "product_name": "Malloc disk", 00:09:58.903 "block_size": 512, 00:09:58.903 "num_blocks": 65536, 00:09:58.903 "uuid": "2842224d-2712-11ef-b084-113036b5c18d", 00:09:58.903 "assigned_rate_limits": { 00:09:58.903 "rw_ios_per_sec": 0, 00:09:58.903 "rw_mbytes_per_sec": 0, 00:09:58.903 "r_mbytes_per_sec": 0, 00:09:58.903 "w_mbytes_per_sec": 0 00:09:58.903 }, 00:09:58.903 "claimed": true, 00:09:58.903 "claim_type": "exclusive_write", 00:09:58.903 "zoned": false, 00:09:58.903 "supported_io_types": { 00:09:58.903 "read": true, 00:09:58.903 "write": true, 00:09:58.903 "unmap": true, 00:09:58.903 "write_zeroes": true, 00:09:58.903 "flush": true, 00:09:58.903 "reset": true, 00:09:58.903 "compare": false, 00:09:58.903 "compare_and_write": false, 00:09:58.903 "abort": true, 00:09:58.903 "nvme_admin": false, 00:09:58.903 "nvme_io": false 00:09:58.903 }, 00:09:58.903 "memory_domains": [ 00:09:58.903 { 00:09:58.903 "dma_device_id": "system", 00:09:58.903 "dma_device_type": 1 00:09:58.903 }, 00:09:58.903 { 00:09:58.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.903 "dma_device_type": 2 00:09:58.903 } 00:09:58.903 ], 00:09:58.903 "driver_specific": {} 00:09:58.903 }' 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:58.903 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.164 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.164 10:14:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:59.164 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.164 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.164 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:59.164 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:59.164 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:59.164 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:59.442 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:59.442 "name": "BaseBdev3", 00:09:59.442 "aliases": [ 00:09:59.442 "29222367-2712-11ef-b084-113036b5c18d" 00:09:59.442 ], 00:09:59.442 "product_name": "Malloc disk", 00:09:59.442 "block_size": 512, 00:09:59.442 "num_blocks": 65536, 00:09:59.442 "uuid": "29222367-2712-11ef-b084-113036b5c18d", 00:09:59.442 "assigned_rate_limits": { 00:09:59.442 "rw_ios_per_sec": 0, 00:09:59.442 "rw_mbytes_per_sec": 0, 00:09:59.442 "r_mbytes_per_sec": 0, 00:09:59.442 "w_mbytes_per_sec": 0 00:09:59.442 }, 00:09:59.443 "claimed": true, 00:09:59.443 "claim_type": "exclusive_write", 00:09:59.443 "zoned": false, 00:09:59.443 "supported_io_types": { 00:09:59.443 "read": true, 00:09:59.443 "write": true, 00:09:59.443 "unmap": true, 00:09:59.443 "write_zeroes": true, 00:09:59.443 "flush": true, 00:09:59.443 "reset": true, 00:09:59.443 "compare": false, 00:09:59.443 "compare_and_write": false, 00:09:59.443 "abort": true, 00:09:59.443 "nvme_admin": false, 00:09:59.443 "nvme_io": false 00:09:59.443 }, 00:09:59.443 "memory_domains": [ 00:09:59.443 { 00:09:59.443 "dma_device_id": "system", 00:09:59.443 "dma_device_type": 1 00:09:59.443 }, 00:09:59.443 { 00:09:59.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.443 "dma_device_type": 2 00:09:59.443 } 00:09:59.443 ], 00:09:59.443 "driver_specific": {} 00:09:59.443 }' 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:59.443 10:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:59.701 [2024-06-10 10:14:05.093298] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.701 [2024-06-10 10:14:05.093330] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.701 [2024-06-10 10:14:05.093344] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.701 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.959 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:59.959 "name": "Existed_Raid", 00:09:59.959 "uuid": "292228cc-2712-11ef-b084-113036b5c18d", 00:09:59.959 "strip_size_kb": 64, 00:09:59.959 "state": "offline", 00:09:59.959 "raid_level": "concat", 00:09:59.959 "superblock": false, 00:09:59.959 "num_base_bdevs": 3, 00:09:59.959 "num_base_bdevs_discovered": 2, 00:09:59.959 "num_base_bdevs_operational": 2, 00:09:59.959 "base_bdevs_list": [ 00:09:59.959 { 00:09:59.959 "name": null, 00:09:59.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.959 "is_configured": false, 00:09:59.959 "data_offset": 0, 00:09:59.959 "data_size": 65536 00:09:59.959 }, 00:09:59.959 { 00:09:59.959 "name": "BaseBdev2", 00:09:59.959 "uuid": "2842224d-2712-11ef-b084-113036b5c18d", 00:09:59.959 "is_configured": true, 00:09:59.959 "data_offset": 0, 00:09:59.959 "data_size": 65536 00:09:59.959 }, 00:09:59.959 { 00:09:59.959 "name": "BaseBdev3", 00:09:59.959 "uuid": "29222367-2712-11ef-b084-113036b5c18d", 00:09:59.959 "is_configured": true, 00:09:59.959 "data_offset": 0, 00:09:59.959 "data_size": 65536 
00:09:59.959 } 00:09:59.959 ] 00:09:59.959 }' 00:09:59.959 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:59.959 10:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.217 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:00.217 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:00.217 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.217 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:00.475 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:00.475 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.475 10:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:00.733 [2024-06-10 10:14:06.254163] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.733 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:00.733 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:00.733 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.733 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:00.990 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:00.990 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.990 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:01.248 [2024-06-10 10:14:06.802988] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.248 [2024-06-10 10:14:06.803030] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abbaa00 name Existed_Raid, state offline 00:10:01.248 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:01.248 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:01.248 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.248 10:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:01.837 10:14:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:01.837 BaseBdev2 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:01.837 10:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:02.138 10:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.397 [ 00:10:02.397 { 00:10:02.397 "name": "BaseBdev2", 00:10:02.397 "aliases": [ 00:10:02.397 "2c081cc8-2712-11ef-b084-113036b5c18d" 00:10:02.397 ], 00:10:02.397 "product_name": "Malloc disk", 00:10:02.397 "block_size": 512, 00:10:02.397 "num_blocks": 65536, 00:10:02.397 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:02.397 "assigned_rate_limits": { 00:10:02.397 "rw_ios_per_sec": 0, 00:10:02.397 "rw_mbytes_per_sec": 0, 00:10:02.397 "r_mbytes_per_sec": 0, 00:10:02.397 "w_mbytes_per_sec": 0 00:10:02.397 }, 00:10:02.397 "claimed": false, 00:10:02.397 "zoned": false, 00:10:02.397 "supported_io_types": { 00:10:02.397 "read": true, 00:10:02.397 "write": true, 00:10:02.397 "unmap": true, 00:10:02.397 "write_zeroes": true, 00:10:02.397 "flush": true, 00:10:02.397 "reset": true, 00:10:02.397 "compare": false, 00:10:02.397 "compare_and_write": false, 00:10:02.397 "abort": true, 00:10:02.397 "nvme_admin": false, 00:10:02.397 "nvme_io": false 00:10:02.397 }, 00:10:02.397 "memory_domains": [ 00:10:02.397 { 00:10:02.397 "dma_device_id": "system", 00:10:02.397 "dma_device_type": 1 00:10:02.397 }, 00:10:02.397 { 00:10:02.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.397 "dma_device_type": 2 00:10:02.397 } 00:10:02.397 ], 00:10:02.397 "driver_specific": {} 00:10:02.397 } 00:10:02.397 ] 00:10:02.397 10:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:10:02.397 10:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:02.397 10:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:02.397 10:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.657 BaseBdev3 00:10:02.657 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:02.657 10:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:10:02.657 10:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:02.657 10:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 
-- # local i 00:10:02.657 10:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:02.657 10:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:02.657 10:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:02.916 10:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.174 [ 00:10:03.174 { 00:10:03.174 "name": "BaseBdev3", 00:10:03.174 "aliases": [ 00:10:03.174 "2c8193de-2712-11ef-b084-113036b5c18d" 00:10:03.174 ], 00:10:03.174 "product_name": "Malloc disk", 00:10:03.174 "block_size": 512, 00:10:03.174 "num_blocks": 65536, 00:10:03.174 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:03.174 "assigned_rate_limits": { 00:10:03.174 "rw_ios_per_sec": 0, 00:10:03.174 "rw_mbytes_per_sec": 0, 00:10:03.174 "r_mbytes_per_sec": 0, 00:10:03.174 "w_mbytes_per_sec": 0 00:10:03.174 }, 00:10:03.174 "claimed": false, 00:10:03.174 "zoned": false, 00:10:03.174 "supported_io_types": { 00:10:03.174 "read": true, 00:10:03.174 "write": true, 00:10:03.174 "unmap": true, 00:10:03.174 "write_zeroes": true, 00:10:03.174 "flush": true, 00:10:03.174 "reset": true, 00:10:03.174 "compare": false, 00:10:03.174 "compare_and_write": false, 00:10:03.174 "abort": true, 00:10:03.174 "nvme_admin": false, 00:10:03.174 "nvme_io": false 00:10:03.174 }, 00:10:03.174 "memory_domains": [ 00:10:03.174 { 00:10:03.174 "dma_device_id": "system", 00:10:03.174 "dma_device_type": 1 00:10:03.174 }, 00:10:03.174 { 00:10:03.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.174 "dma_device_type": 2 00:10:03.174 } 00:10:03.174 ], 00:10:03.174 "driver_specific": {} 00:10:03.174 } 00:10:03.174 ] 00:10:03.174 10:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:10:03.174 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:03.174 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:03.174 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:03.433 [2024-06-10 10:14:08.951898] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.433 [2024-06-10 10:14:08.951961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.433 [2024-06-10 10:14:08.951971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.433 [2024-06-10 10:14:08.952446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:03.433 10:14:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:03.433 10:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.692 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:03.692 "name": "Existed_Raid", 00:10:03.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.692 "strip_size_kb": 64, 00:10:03.692 "state": "configuring", 00:10:03.692 "raid_level": "concat", 00:10:03.692 "superblock": false, 00:10:03.692 "num_base_bdevs": 3, 00:10:03.692 "num_base_bdevs_discovered": 2, 00:10:03.692 "num_base_bdevs_operational": 3, 00:10:03.692 "base_bdevs_list": [ 00:10:03.692 { 00:10:03.692 "name": "BaseBdev1", 00:10:03.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.692 "is_configured": false, 00:10:03.692 "data_offset": 0, 00:10:03.692 "data_size": 0 00:10:03.692 }, 00:10:03.692 { 00:10:03.692 "name": "BaseBdev2", 00:10:03.693 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:03.693 "is_configured": true, 00:10:03.693 "data_offset": 0, 00:10:03.693 "data_size": 65536 00:10:03.693 }, 00:10:03.693 { 00:10:03.693 "name": "BaseBdev3", 00:10:03.693 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:03.693 "is_configured": true, 00:10:03.693 "data_offset": 0, 00:10:03.693 "data_size": 65536 00:10:03.693 } 00:10:03.693 ] 00:10:03.693 }' 00:10:03.693 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:03.693 10:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.259 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:04.516 [2024-06-10 10:14:09.883909] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:04.516 10:14:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.516 10:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.774 10:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:04.774 "name": "Existed_Raid", 00:10:04.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.774 "strip_size_kb": 64, 00:10:04.774 "state": "configuring", 00:10:04.774 "raid_level": "concat", 00:10:04.774 "superblock": false, 00:10:04.774 "num_base_bdevs": 3, 00:10:04.774 "num_base_bdevs_discovered": 1, 00:10:04.774 "num_base_bdevs_operational": 3, 00:10:04.774 "base_bdevs_list": [ 00:10:04.774 { 00:10:04.774 "name": "BaseBdev1", 00:10:04.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.774 "is_configured": false, 00:10:04.774 "data_offset": 0, 00:10:04.774 "data_size": 0 00:10:04.774 }, 00:10:04.774 { 00:10:04.774 "name": null, 00:10:04.774 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:04.774 "is_configured": false, 00:10:04.774 "data_offset": 0, 00:10:04.774 "data_size": 65536 00:10:04.774 }, 00:10:04.774 { 00:10:04.774 "name": "BaseBdev3", 00:10:04.774 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:04.774 "is_configured": true, 00:10:04.774 "data_offset": 0, 00:10:04.774 "data_size": 65536 00:10:04.774 } 00:10:04.774 ] 00:10:04.774 }' 00:10:04.774 10:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:04.774 10:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.032 10:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.032 10:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:05.290 10:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:05.290 10:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.558 [2024-06-10 10:14:11.052078] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.558 BaseBdev1 00:10:05.558 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:05.558 10:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:10:05.558 10:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:05.558 10:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:10:05.558 10:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:05.558 10:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:05.558 10:14:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:05.835 10:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.102 [ 00:10:06.102 { 00:10:06.102 "name": "BaseBdev1", 00:10:06.102 "aliases": [ 00:10:06.102 "2e3e8989-2712-11ef-b084-113036b5c18d" 00:10:06.102 ], 00:10:06.102 "product_name": "Malloc disk", 00:10:06.102 "block_size": 512, 00:10:06.102 "num_blocks": 65536, 00:10:06.102 "uuid": "2e3e8989-2712-11ef-b084-113036b5c18d", 00:10:06.102 "assigned_rate_limits": { 00:10:06.102 "rw_ios_per_sec": 0, 00:10:06.102 "rw_mbytes_per_sec": 0, 00:10:06.102 "r_mbytes_per_sec": 0, 00:10:06.102 "w_mbytes_per_sec": 0 00:10:06.102 }, 00:10:06.102 "claimed": true, 00:10:06.102 "claim_type": "exclusive_write", 00:10:06.102 "zoned": false, 00:10:06.102 "supported_io_types": { 00:10:06.102 "read": true, 00:10:06.102 "write": true, 00:10:06.102 "unmap": true, 00:10:06.102 "write_zeroes": true, 00:10:06.102 "flush": true, 00:10:06.102 "reset": true, 00:10:06.102 "compare": false, 00:10:06.102 "compare_and_write": false, 00:10:06.102 "abort": true, 00:10:06.102 "nvme_admin": false, 00:10:06.102 "nvme_io": false 00:10:06.102 }, 00:10:06.102 "memory_domains": [ 00:10:06.102 { 00:10:06.102 "dma_device_id": "system", 00:10:06.102 "dma_device_type": 1 00:10:06.102 }, 00:10:06.102 { 00:10:06.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.102 "dma_device_type": 2 00:10:06.102 } 00:10:06.102 ], 00:10:06.102 "driver_specific": {} 00:10:06.102 } 00:10:06.102 ] 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.102 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.361 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:06.361 "name": "Existed_Raid", 00:10:06.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.361 "strip_size_kb": 64, 00:10:06.361 "state": "configuring", 
00:10:06.361 "raid_level": "concat", 00:10:06.361 "superblock": false, 00:10:06.361 "num_base_bdevs": 3, 00:10:06.361 "num_base_bdevs_discovered": 2, 00:10:06.361 "num_base_bdevs_operational": 3, 00:10:06.361 "base_bdevs_list": [ 00:10:06.361 { 00:10:06.361 "name": "BaseBdev1", 00:10:06.361 "uuid": "2e3e8989-2712-11ef-b084-113036b5c18d", 00:10:06.361 "is_configured": true, 00:10:06.361 "data_offset": 0, 00:10:06.361 "data_size": 65536 00:10:06.361 }, 00:10:06.361 { 00:10:06.361 "name": null, 00:10:06.361 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:06.361 "is_configured": false, 00:10:06.361 "data_offset": 0, 00:10:06.361 "data_size": 65536 00:10:06.361 }, 00:10:06.361 { 00:10:06.361 "name": "BaseBdev3", 00:10:06.361 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:06.361 "is_configured": true, 00:10:06.361 "data_offset": 0, 00:10:06.361 "data_size": 65536 00:10:06.361 } 00:10:06.361 ] 00:10:06.361 }' 00:10:06.361 10:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:06.361 10:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.646 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.646 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:06.932 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:06.932 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:07.190 [2024-06-10 10:14:12.724004] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.190 10:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.448 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:07.448 "name": "Existed_Raid", 00:10:07.448 "uuid": "00000000-0000-0000-0000-000000000000", 
00:10:07.448 "strip_size_kb": 64, 00:10:07.448 "state": "configuring", 00:10:07.448 "raid_level": "concat", 00:10:07.448 "superblock": false, 00:10:07.448 "num_base_bdevs": 3, 00:10:07.448 "num_base_bdevs_discovered": 1, 00:10:07.448 "num_base_bdevs_operational": 3, 00:10:07.448 "base_bdevs_list": [ 00:10:07.448 { 00:10:07.448 "name": "BaseBdev1", 00:10:07.448 "uuid": "2e3e8989-2712-11ef-b084-113036b5c18d", 00:10:07.448 "is_configured": true, 00:10:07.448 "data_offset": 0, 00:10:07.448 "data_size": 65536 00:10:07.448 }, 00:10:07.448 { 00:10:07.449 "name": null, 00:10:07.449 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:07.449 "is_configured": false, 00:10:07.449 "data_offset": 0, 00:10:07.449 "data_size": 65536 00:10:07.449 }, 00:10:07.449 { 00:10:07.449 "name": null, 00:10:07.449 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:07.449 "is_configured": false, 00:10:07.449 "data_offset": 0, 00:10:07.449 "data_size": 65536 00:10:07.449 } 00:10:07.449 ] 00:10:07.449 }' 00:10:07.449 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:07.449 10:14:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.707 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.707 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:08.275 [2024-06-10 10:14:13.828038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.275 10:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.533 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:10:08.533 "name": "Existed_Raid", 00:10:08.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.533 "strip_size_kb": 64, 00:10:08.533 "state": "configuring", 00:10:08.533 "raid_level": "concat", 00:10:08.533 "superblock": false, 00:10:08.533 "num_base_bdevs": 3, 00:10:08.533 "num_base_bdevs_discovered": 2, 00:10:08.533 "num_base_bdevs_operational": 3, 00:10:08.533 "base_bdevs_list": [ 00:10:08.533 { 00:10:08.533 "name": "BaseBdev1", 00:10:08.533 "uuid": "2e3e8989-2712-11ef-b084-113036b5c18d", 00:10:08.533 "is_configured": true, 00:10:08.533 "data_offset": 0, 00:10:08.533 "data_size": 65536 00:10:08.533 }, 00:10:08.533 { 00:10:08.533 "name": null, 00:10:08.533 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:08.533 "is_configured": false, 00:10:08.533 "data_offset": 0, 00:10:08.533 "data_size": 65536 00:10:08.533 }, 00:10:08.533 { 00:10:08.533 "name": "BaseBdev3", 00:10:08.533 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:08.533 "is_configured": true, 00:10:08.533 "data_offset": 0, 00:10:08.533 "data_size": 65536 00:10:08.533 } 00:10:08.533 ] 00:10:08.533 }' 00:10:08.533 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:08.533 10:14:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.101 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.101 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.101 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:09.101 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:09.360 [2024-06-10 10:14:14.864083] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.360 10:14:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.618 10:14:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:09.618 "name": "Existed_Raid", 00:10:09.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.618 "strip_size_kb": 64, 00:10:09.618 "state": "configuring", 00:10:09.618 "raid_level": "concat", 00:10:09.618 "superblock": false, 00:10:09.618 "num_base_bdevs": 3, 00:10:09.618 "num_base_bdevs_discovered": 1, 00:10:09.618 "num_base_bdevs_operational": 3, 00:10:09.618 "base_bdevs_list": [ 00:10:09.618 { 00:10:09.618 "name": null, 00:10:09.618 "uuid": "2e3e8989-2712-11ef-b084-113036b5c18d", 00:10:09.618 "is_configured": false, 00:10:09.618 "data_offset": 0, 00:10:09.618 "data_size": 65536 00:10:09.618 }, 00:10:09.618 { 00:10:09.618 "name": null, 00:10:09.618 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:09.618 "is_configured": false, 00:10:09.618 "data_offset": 0, 00:10:09.618 "data_size": 65536 00:10:09.618 }, 00:10:09.618 { 00:10:09.618 "name": "BaseBdev3", 00:10:09.618 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:09.618 "is_configured": true, 00:10:09.618 "data_offset": 0, 00:10:09.618 "data_size": 65536 00:10:09.618 } 00:10:09.618 ] 00:10:09.618 }' 00:10:09.618 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:09.618 10:14:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.876 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.876 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.134 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:10.134 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:10.393 [2024-06-10 10:14:15.840885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.393 10:14:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.651 10:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:10.651 "name": "Existed_Raid", 00:10:10.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.651 "strip_size_kb": 64, 00:10:10.651 "state": "configuring", 00:10:10.651 "raid_level": "concat", 00:10:10.651 "superblock": false, 00:10:10.651 "num_base_bdevs": 3, 00:10:10.651 "num_base_bdevs_discovered": 2, 00:10:10.651 "num_base_bdevs_operational": 3, 00:10:10.651 "base_bdevs_list": [ 00:10:10.651 { 00:10:10.651 "name": null, 00:10:10.651 "uuid": "2e3e8989-2712-11ef-b084-113036b5c18d", 00:10:10.651 "is_configured": false, 00:10:10.651 "data_offset": 0, 00:10:10.651 "data_size": 65536 00:10:10.651 }, 00:10:10.651 { 00:10:10.651 "name": "BaseBdev2", 00:10:10.651 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:10.651 "is_configured": true, 00:10:10.651 "data_offset": 0, 00:10:10.651 "data_size": 65536 00:10:10.651 }, 00:10:10.651 { 00:10:10.651 "name": "BaseBdev3", 00:10:10.651 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:10.651 "is_configured": true, 00:10:10.651 "data_offset": 0, 00:10:10.651 "data_size": 65536 00:10:10.651 } 00:10:10.651 ] 00:10:10.651 }' 00:10:10.651 10:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:10.651 10:14:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.909 10:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.909 10:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.166 10:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:11.166 10:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.166 10:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:11.424 10:14:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2e3e8989-2712-11ef-b084-113036b5c18d 00:10:11.682 [2024-06-10 10:14:17.201044] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:11.682 [2024-06-10 10:14:17.201072] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abbaa00 00:10:11.682 [2024-06-10 10:14:17.201077] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:11.682 [2024-06-10 10:14:17.201097] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac1de20 00:10:11.682 [2024-06-10 10:14:17.201156] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abbaa00 00:10:11.682 [2024-06-10 10:14:17.201160] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82abbaa00 00:10:11.682 [2024-06-10 10:14:17.201189] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.682 NewBaseBdev 00:10:11.682 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:11.682 10:14:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:10:11.682 10:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:11.682 10:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:10:11.682 10:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:11.682 10:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:11.682 10:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:11.940 10:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:12.198 [ 00:10:12.198 { 00:10:12.198 "name": "NewBaseBdev", 00:10:12.198 "aliases": [ 00:10:12.198 "2e3e8989-2712-11ef-b084-113036b5c18d" 00:10:12.198 ], 00:10:12.198 "product_name": "Malloc disk", 00:10:12.198 "block_size": 512, 00:10:12.198 "num_blocks": 65536, 00:10:12.198 "uuid": "2e3e8989-2712-11ef-b084-113036b5c18d", 00:10:12.198 "assigned_rate_limits": { 00:10:12.198 "rw_ios_per_sec": 0, 00:10:12.198 "rw_mbytes_per_sec": 0, 00:10:12.198 "r_mbytes_per_sec": 0, 00:10:12.198 "w_mbytes_per_sec": 0 00:10:12.198 }, 00:10:12.198 "claimed": true, 00:10:12.198 "claim_type": "exclusive_write", 00:10:12.198 "zoned": false, 00:10:12.198 "supported_io_types": { 00:10:12.198 "read": true, 00:10:12.198 "write": true, 00:10:12.198 "unmap": true, 00:10:12.198 "write_zeroes": true, 00:10:12.198 "flush": true, 00:10:12.198 "reset": true, 00:10:12.198 "compare": false, 00:10:12.198 "compare_and_write": false, 00:10:12.198 "abort": true, 00:10:12.198 "nvme_admin": false, 00:10:12.198 "nvme_io": false 00:10:12.198 }, 00:10:12.198 "memory_domains": [ 00:10:12.198 { 00:10:12.198 "dma_device_id": "system", 00:10:12.198 "dma_device_type": 1 00:10:12.198 }, 00:10:12.198 { 00:10:12.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.198 "dma_device_type": 2 00:10:12.198 } 00:10:12.198 ], 00:10:12.198 "driver_specific": {} 00:10:12.198 } 00:10:12.198 ] 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.198 10:14:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.198 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.456 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:12.456 "name": "Existed_Raid", 00:10:12.456 "uuid": "31e8d14a-2712-11ef-b084-113036b5c18d", 00:10:12.456 "strip_size_kb": 64, 00:10:12.456 "state": "online", 00:10:12.456 "raid_level": "concat", 00:10:12.456 "superblock": false, 00:10:12.456 "num_base_bdevs": 3, 00:10:12.456 "num_base_bdevs_discovered": 3, 00:10:12.456 "num_base_bdevs_operational": 3, 00:10:12.456 "base_bdevs_list": [ 00:10:12.456 { 00:10:12.456 "name": "NewBaseBdev", 00:10:12.456 "uuid": "2e3e8989-2712-11ef-b084-113036b5c18d", 00:10:12.456 "is_configured": true, 00:10:12.456 "data_offset": 0, 00:10:12.456 "data_size": 65536 00:10:12.456 }, 00:10:12.456 { 00:10:12.456 "name": "BaseBdev2", 00:10:12.457 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:12.457 "is_configured": true, 00:10:12.457 "data_offset": 0, 00:10:12.457 "data_size": 65536 00:10:12.457 }, 00:10:12.457 { 00:10:12.457 "name": "BaseBdev3", 00:10:12.457 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:12.457 "is_configured": true, 00:10:12.457 "data_offset": 0, 00:10:12.457 "data_size": 65536 00:10:12.457 } 00:10:12.457 ] 00:10:12.457 }' 00:10:12.457 10:14:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.457 10:14:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.715 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:12.715 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:12.715 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:12.715 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:12.715 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:12.715 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:12.715 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:12.715 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:12.973 [2024-06-10 10:14:18.440968] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.973 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:12.973 "name": "Existed_Raid", 00:10:12.973 "aliases": [ 00:10:12.973 "31e8d14a-2712-11ef-b084-113036b5c18d" 00:10:12.973 ], 00:10:12.973 "product_name": "Raid Volume", 00:10:12.973 "block_size": 512, 00:10:12.973 "num_blocks": 196608, 00:10:12.973 "uuid": "31e8d14a-2712-11ef-b084-113036b5c18d", 00:10:12.973 "assigned_rate_limits": { 00:10:12.973 "rw_ios_per_sec": 0, 00:10:12.973 "rw_mbytes_per_sec": 0, 00:10:12.973 "r_mbytes_per_sec": 0, 00:10:12.973 "w_mbytes_per_sec": 0 00:10:12.973 }, 00:10:12.973 "claimed": false, 00:10:12.973 "zoned": false, 00:10:12.973 "supported_io_types": { 00:10:12.973 "read": true, 00:10:12.973 "write": true, 
00:10:12.973 "unmap": true, 00:10:12.973 "write_zeroes": true, 00:10:12.973 "flush": true, 00:10:12.973 "reset": true, 00:10:12.973 "compare": false, 00:10:12.973 "compare_and_write": false, 00:10:12.973 "abort": false, 00:10:12.973 "nvme_admin": false, 00:10:12.973 "nvme_io": false 00:10:12.973 }, 00:10:12.973 "memory_domains": [ 00:10:12.973 { 00:10:12.973 "dma_device_id": "system", 00:10:12.973 "dma_device_type": 1 00:10:12.973 }, 00:10:12.973 { 00:10:12.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.973 "dma_device_type": 2 00:10:12.973 }, 00:10:12.973 { 00:10:12.973 "dma_device_id": "system", 00:10:12.973 "dma_device_type": 1 00:10:12.973 }, 00:10:12.973 { 00:10:12.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.973 "dma_device_type": 2 00:10:12.973 }, 00:10:12.973 { 00:10:12.973 "dma_device_id": "system", 00:10:12.973 "dma_device_type": 1 00:10:12.973 }, 00:10:12.973 { 00:10:12.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.973 "dma_device_type": 2 00:10:12.973 } 00:10:12.973 ], 00:10:12.973 "driver_specific": { 00:10:12.973 "raid": { 00:10:12.973 "uuid": "31e8d14a-2712-11ef-b084-113036b5c18d", 00:10:12.973 "strip_size_kb": 64, 00:10:12.973 "state": "online", 00:10:12.973 "raid_level": "concat", 00:10:12.973 "superblock": false, 00:10:12.973 "num_base_bdevs": 3, 00:10:12.973 "num_base_bdevs_discovered": 3, 00:10:12.974 "num_base_bdevs_operational": 3, 00:10:12.974 "base_bdevs_list": [ 00:10:12.974 { 00:10:12.974 "name": "NewBaseBdev", 00:10:12.974 "uuid": "2e3e8989-2712-11ef-b084-113036b5c18d", 00:10:12.974 "is_configured": true, 00:10:12.974 "data_offset": 0, 00:10:12.974 "data_size": 65536 00:10:12.974 }, 00:10:12.974 { 00:10:12.974 "name": "BaseBdev2", 00:10:12.974 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:12.974 "is_configured": true, 00:10:12.974 "data_offset": 0, 00:10:12.974 "data_size": 65536 00:10:12.974 }, 00:10:12.974 { 00:10:12.974 "name": "BaseBdev3", 00:10:12.974 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:12.974 "is_configured": true, 00:10:12.974 "data_offset": 0, 00:10:12.974 "data_size": 65536 00:10:12.974 } 00:10:12.974 ] 00:10:12.974 } 00:10:12.974 } 00:10:12.974 }' 00:10:12.974 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.974 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:12.974 BaseBdev2 00:10:12.974 BaseBdev3' 00:10:12.974 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:12.974 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:12.974 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:13.233 "name": "NewBaseBdev", 00:10:13.233 "aliases": [ 00:10:13.233 "2e3e8989-2712-11ef-b084-113036b5c18d" 00:10:13.233 ], 00:10:13.233 "product_name": "Malloc disk", 00:10:13.233 "block_size": 512, 00:10:13.233 "num_blocks": 65536, 00:10:13.233 "uuid": "2e3e8989-2712-11ef-b084-113036b5c18d", 00:10:13.233 "assigned_rate_limits": { 00:10:13.233 "rw_ios_per_sec": 0, 00:10:13.233 "rw_mbytes_per_sec": 0, 00:10:13.233 "r_mbytes_per_sec": 0, 00:10:13.233 "w_mbytes_per_sec": 0 00:10:13.233 }, 00:10:13.233 "claimed": true, 
00:10:13.233 "claim_type": "exclusive_write", 00:10:13.233 "zoned": false, 00:10:13.233 "supported_io_types": { 00:10:13.233 "read": true, 00:10:13.233 "write": true, 00:10:13.233 "unmap": true, 00:10:13.233 "write_zeroes": true, 00:10:13.233 "flush": true, 00:10:13.233 "reset": true, 00:10:13.233 "compare": false, 00:10:13.233 "compare_and_write": false, 00:10:13.233 "abort": true, 00:10:13.233 "nvme_admin": false, 00:10:13.233 "nvme_io": false 00:10:13.233 }, 00:10:13.233 "memory_domains": [ 00:10:13.233 { 00:10:13.233 "dma_device_id": "system", 00:10:13.233 "dma_device_type": 1 00:10:13.233 }, 00:10:13.233 { 00:10:13.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.233 "dma_device_type": 2 00:10:13.233 } 00:10:13.233 ], 00:10:13.233 "driver_specific": {} 00:10:13.233 }' 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:13.233 10:14:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:13.491 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:13.491 "name": "BaseBdev2", 00:10:13.491 "aliases": [ 00:10:13.491 "2c081cc8-2712-11ef-b084-113036b5c18d" 00:10:13.491 ], 00:10:13.491 "product_name": "Malloc disk", 00:10:13.491 "block_size": 512, 00:10:13.491 "num_blocks": 65536, 00:10:13.491 "uuid": "2c081cc8-2712-11ef-b084-113036b5c18d", 00:10:13.491 "assigned_rate_limits": { 00:10:13.491 "rw_ios_per_sec": 0, 00:10:13.491 "rw_mbytes_per_sec": 0, 00:10:13.491 "r_mbytes_per_sec": 0, 00:10:13.491 "w_mbytes_per_sec": 0 00:10:13.491 }, 00:10:13.491 "claimed": true, 00:10:13.491 "claim_type": "exclusive_write", 00:10:13.491 "zoned": false, 00:10:13.491 "supported_io_types": { 00:10:13.491 "read": true, 00:10:13.491 "write": true, 00:10:13.491 "unmap": true, 00:10:13.491 "write_zeroes": true, 00:10:13.491 "flush": true, 00:10:13.491 "reset": true, 00:10:13.491 "compare": false, 00:10:13.491 "compare_and_write": false, 00:10:13.491 "abort": true, 00:10:13.491 "nvme_admin": false, 00:10:13.491 "nvme_io": false 00:10:13.491 }, 00:10:13.491 "memory_domains": 
[ 00:10:13.491 { 00:10:13.491 "dma_device_id": "system", 00:10:13.491 "dma_device_type": 1 00:10:13.491 }, 00:10:13.491 { 00:10:13.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.491 "dma_device_type": 2 00:10:13.491 } 00:10:13.491 ], 00:10:13.491 "driver_specific": {} 00:10:13.491 }' 00:10:13.491 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.491 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:13.492 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:13.492 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.492 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:13.492 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:13.492 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.492 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:13.492 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:13.492 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.749 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:13.749 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:13.749 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:13.749 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:13.749 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:14.008 "name": "BaseBdev3", 00:10:14.008 "aliases": [ 00:10:14.008 "2c8193de-2712-11ef-b084-113036b5c18d" 00:10:14.008 ], 00:10:14.008 "product_name": "Malloc disk", 00:10:14.008 "block_size": 512, 00:10:14.008 "num_blocks": 65536, 00:10:14.008 "uuid": "2c8193de-2712-11ef-b084-113036b5c18d", 00:10:14.008 "assigned_rate_limits": { 00:10:14.008 "rw_ios_per_sec": 0, 00:10:14.008 "rw_mbytes_per_sec": 0, 00:10:14.008 "r_mbytes_per_sec": 0, 00:10:14.008 "w_mbytes_per_sec": 0 00:10:14.008 }, 00:10:14.008 "claimed": true, 00:10:14.008 "claim_type": "exclusive_write", 00:10:14.008 "zoned": false, 00:10:14.008 "supported_io_types": { 00:10:14.008 "read": true, 00:10:14.008 "write": true, 00:10:14.008 "unmap": true, 00:10:14.008 "write_zeroes": true, 00:10:14.008 "flush": true, 00:10:14.008 "reset": true, 00:10:14.008 "compare": false, 00:10:14.008 "compare_and_write": false, 00:10:14.008 "abort": true, 00:10:14.008 "nvme_admin": false, 00:10:14.008 "nvme_io": false 00:10:14.008 }, 00:10:14.008 "memory_domains": [ 00:10:14.008 { 00:10:14.008 "dma_device_id": "system", 00:10:14.008 "dma_device_type": 1 00:10:14.008 }, 00:10:14.008 { 00:10:14.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.008 "dma_device_type": 2 00:10:14.008 } 00:10:14.008 ], 00:10:14.008 "driver_specific": {} 00:10:14.008 }' 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:14.008 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:14.267 [2024-06-10 10:14:19.692958] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.267 [2024-06-10 10:14:19.692978] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.267 [2024-06-10 10:14:19.692995] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.267 [2024-06-10 10:14:19.693006] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.267 [2024-06-10 10:14:19.693010] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abbaa00 name Existed_Raid, state offline 00:10:14.267 10:14:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 54790 00:10:14.267 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 54790 ']' 00:10:14.267 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 54790 00:10:14.267 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:10:14.267 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:10:14.267 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # tail -1 00:10:14.267 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps -c -o command 54790 00:10:14.267 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:10:14.268 killing process with pid 54790 00:10:14.268 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:10:14.268 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 54790' 00:10:14.268 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 54790 00:10:14.268 [2024-06-10 10:14:19.723288] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.268 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 54790 00:10:14.268 [2024-06-10 10:14:19.737676] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.528 10:14:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:14.528 00:10:14.528 real 0m24.288s 00:10:14.528 user 0m44.567s 00:10:14.528 sys 0m3.207s 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.528 ************************************ 00:10:14.528 END TEST raid_state_function_test 00:10:14.528 ************************************ 00:10:14.528 10:14:19 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:14.528 10:14:19 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:10:14.528 10:14:19 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:14.528 10:14:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.528 ************************************ 00:10:14.528 START TEST raid_state_function_test_sb 00:10:14.528 ************************************ 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 3 true 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 
00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=55519 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 55519' 00:10:14.528 Process raid pid: 55519 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 55519 /var/tmp/spdk-raid.sock 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 55519 ']' 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:14.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:14.528 10:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.528 [2024-06-10 10:14:19.972966] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:10:14.528 [2024-06-10 10:14:19.973300] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:15.096 EAL: TSC is not safe to use in SMP mode 00:10:15.096 EAL: TSC is not invariant 00:10:15.096 [2024-06-10 10:14:20.459623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.096 [2024-06-10 10:14:20.538171] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:10:15.096 [2024-06-10 10:14:20.540236] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.096 [2024-06-10 10:14:20.540932] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.096 [2024-06-10 10:14:20.540943] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.355 10:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:15.355 10:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:10:15.355 10:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:15.614 [2024-06-10 10:14:21.151242] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.614 [2024-06-10 10:14:21.151292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.614 [2024-06-10 10:14:21.151296] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.614 [2024-06-10 10:14:21.151304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.614 [2024-06-10 10:14:21.151307] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.614 [2024-06-10 10:14:21.151314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.614 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.871 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:15.871 "name": "Existed_Raid", 00:10:15.871 "uuid": "34439023-2712-11ef-b084-113036b5c18d", 00:10:15.871 "strip_size_kb": 64, 00:10:15.871 "state": "configuring", 00:10:15.871 "raid_level": "concat", 00:10:15.871 "superblock": true, 00:10:15.871 "num_base_bdevs": 3, 00:10:15.871 "num_base_bdevs_discovered": 0, 00:10:15.871 
"num_base_bdevs_operational": 3, 00:10:15.871 "base_bdevs_list": [ 00:10:15.871 { 00:10:15.871 "name": "BaseBdev1", 00:10:15.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.871 "is_configured": false, 00:10:15.871 "data_offset": 0, 00:10:15.871 "data_size": 0 00:10:15.871 }, 00:10:15.871 { 00:10:15.871 "name": "BaseBdev2", 00:10:15.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.871 "is_configured": false, 00:10:15.871 "data_offset": 0, 00:10:15.871 "data_size": 0 00:10:15.871 }, 00:10:15.871 { 00:10:15.871 "name": "BaseBdev3", 00:10:15.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.871 "is_configured": false, 00:10:15.871 "data_offset": 0, 00:10:15.871 "data_size": 0 00:10:15.871 } 00:10:15.871 ] 00:10:15.871 }' 00:10:15.871 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:15.871 10:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.129 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:16.388 [2024-06-10 10:14:21.951234] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.388 [2024-06-10 10:14:21.951257] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b6fe500 name Existed_Raid, state configuring 00:10:16.388 10:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:16.652 [2024-06-10 10:14:22.171247] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:16.652 [2024-06-10 10:14:22.171295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:16.652 [2024-06-10 10:14:22.171299] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.652 [2024-06-10 10:14:22.171306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.652 [2024-06-10 10:14:22.171309] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:16.652 [2024-06-10 10:14:22.171316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:16.652 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:16.914 [2024-06-10 10:14:22.432183] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.914 BaseBdev1 00:10:16.914 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:16.914 10:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:10:16.914 10:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:16.914 10:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:10:16.914 10:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:16.914 10:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:16.914 10:14:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:17.173 10:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:17.432 [ 00:10:17.432 { 00:10:17.432 "name": "BaseBdev1", 00:10:17.432 "aliases": [ 00:10:17.432 "3506e136-2712-11ef-b084-113036b5c18d" 00:10:17.432 ], 00:10:17.432 "product_name": "Malloc disk", 00:10:17.432 "block_size": 512, 00:10:17.432 "num_blocks": 65536, 00:10:17.432 "uuid": "3506e136-2712-11ef-b084-113036b5c18d", 00:10:17.432 "assigned_rate_limits": { 00:10:17.432 "rw_ios_per_sec": 0, 00:10:17.432 "rw_mbytes_per_sec": 0, 00:10:17.432 "r_mbytes_per_sec": 0, 00:10:17.432 "w_mbytes_per_sec": 0 00:10:17.432 }, 00:10:17.432 "claimed": true, 00:10:17.432 "claim_type": "exclusive_write", 00:10:17.432 "zoned": false, 00:10:17.432 "supported_io_types": { 00:10:17.432 "read": true, 00:10:17.432 "write": true, 00:10:17.432 "unmap": true, 00:10:17.432 "write_zeroes": true, 00:10:17.432 "flush": true, 00:10:17.432 "reset": true, 00:10:17.432 "compare": false, 00:10:17.432 "compare_and_write": false, 00:10:17.432 "abort": true, 00:10:17.432 "nvme_admin": false, 00:10:17.432 "nvme_io": false 00:10:17.432 }, 00:10:17.432 "memory_domains": [ 00:10:17.432 { 00:10:17.432 "dma_device_id": "system", 00:10:17.432 "dma_device_type": 1 00:10:17.432 }, 00:10:17.432 { 00:10:17.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.432 "dma_device_type": 2 00:10:17.432 } 00:10:17.432 ], 00:10:17.432 "driver_specific": {} 00:10:17.432 } 00:10:17.432 ] 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.432 10:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.691 10:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:17.691 "name": "Existed_Raid", 00:10:17.691 "uuid": 
"34df3432-2712-11ef-b084-113036b5c18d", 00:10:17.691 "strip_size_kb": 64, 00:10:17.691 "state": "configuring", 00:10:17.691 "raid_level": "concat", 00:10:17.691 "superblock": true, 00:10:17.691 "num_base_bdevs": 3, 00:10:17.691 "num_base_bdevs_discovered": 1, 00:10:17.691 "num_base_bdevs_operational": 3, 00:10:17.691 "base_bdevs_list": [ 00:10:17.691 { 00:10:17.691 "name": "BaseBdev1", 00:10:17.691 "uuid": "3506e136-2712-11ef-b084-113036b5c18d", 00:10:17.691 "is_configured": true, 00:10:17.691 "data_offset": 2048, 00:10:17.691 "data_size": 63488 00:10:17.691 }, 00:10:17.691 { 00:10:17.691 "name": "BaseBdev2", 00:10:17.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.691 "is_configured": false, 00:10:17.691 "data_offset": 0, 00:10:17.691 "data_size": 0 00:10:17.691 }, 00:10:17.691 { 00:10:17.691 "name": "BaseBdev3", 00:10:17.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.691 "is_configured": false, 00:10:17.691 "data_offset": 0, 00:10:17.691 "data_size": 0 00:10:17.691 } 00:10:17.691 ] 00:10:17.691 }' 00:10:17.691 10:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:17.692 10:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.259 10:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:18.259 [2024-06-10 10:14:23.795286] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.259 [2024-06-10 10:14:23.795320] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b6fe500 name Existed_Raid, state configuring 00:10:18.259 10:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:18.518 [2024-06-10 10:14:24.027316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.518 [2024-06-10 10:14:24.028004] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.518 [2024-06-10 10:14:24.028046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.518 [2024-06-10 10:14:24.028051] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.518 [2024-06-10 10:14:24.028059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.518 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:18.518 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:18.518 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.519 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.778 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.778 "name": "Existed_Raid", 00:10:18.778 "uuid": "35fa6ab1-2712-11ef-b084-113036b5c18d", 00:10:18.778 "strip_size_kb": 64, 00:10:18.778 "state": "configuring", 00:10:18.778 "raid_level": "concat", 00:10:18.778 "superblock": true, 00:10:18.778 "num_base_bdevs": 3, 00:10:18.778 "num_base_bdevs_discovered": 1, 00:10:18.778 "num_base_bdevs_operational": 3, 00:10:18.778 "base_bdevs_list": [ 00:10:18.778 { 00:10:18.778 "name": "BaseBdev1", 00:10:18.778 "uuid": "3506e136-2712-11ef-b084-113036b5c18d", 00:10:18.778 "is_configured": true, 00:10:18.778 "data_offset": 2048, 00:10:18.778 "data_size": 63488 00:10:18.778 }, 00:10:18.778 { 00:10:18.778 "name": "BaseBdev2", 00:10:18.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.778 "is_configured": false, 00:10:18.778 "data_offset": 0, 00:10:18.778 "data_size": 0 00:10:18.778 }, 00:10:18.778 { 00:10:18.778 "name": "BaseBdev3", 00:10:18.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.778 "is_configured": false, 00:10:18.778 "data_offset": 0, 00:10:18.778 "data_size": 0 00:10:18.778 } 00:10:18.778 ] 00:10:18.778 }' 00:10:18.778 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:18.778 10:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.035 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:19.293 [2024-06-10 10:14:24.731526] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.293 BaseBdev2 00:10:19.293 10:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:19.293 10:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:10:19.293 10:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:19.293 10:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:10:19.293 10:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:19.293 10:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:19.293 10:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:19.551 10:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:19.809 [ 00:10:19.809 { 00:10:19.809 "name": "BaseBdev2", 00:10:19.810 "aliases": [ 00:10:19.810 "3665da8c-2712-11ef-b084-113036b5c18d" 00:10:19.810 ], 00:10:19.810 "product_name": "Malloc disk", 00:10:19.810 "block_size": 512, 00:10:19.810 "num_blocks": 65536, 00:10:19.810 "uuid": "3665da8c-2712-11ef-b084-113036b5c18d", 00:10:19.810 "assigned_rate_limits": { 00:10:19.810 "rw_ios_per_sec": 0, 00:10:19.810 "rw_mbytes_per_sec": 0, 00:10:19.810 "r_mbytes_per_sec": 0, 00:10:19.810 "w_mbytes_per_sec": 0 00:10:19.810 }, 00:10:19.810 "claimed": true, 00:10:19.810 "claim_type": "exclusive_write", 00:10:19.810 "zoned": false, 00:10:19.810 "supported_io_types": { 00:10:19.810 "read": true, 00:10:19.810 "write": true, 00:10:19.810 "unmap": true, 00:10:19.810 "write_zeroes": true, 00:10:19.810 "flush": true, 00:10:19.810 "reset": true, 00:10:19.810 "compare": false, 00:10:19.810 "compare_and_write": false, 00:10:19.810 "abort": true, 00:10:19.810 "nvme_admin": false, 00:10:19.810 "nvme_io": false 00:10:19.810 }, 00:10:19.810 "memory_domains": [ 00:10:19.810 { 00:10:19.810 "dma_device_id": "system", 00:10:19.810 "dma_device_type": 1 00:10:19.810 }, 00:10:19.810 { 00:10:19.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.810 "dma_device_type": 2 00:10:19.810 } 00:10:19.810 ], 00:10:19.810 "driver_specific": {} 00:10:19.810 } 00:10:19.810 ] 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.810 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.069 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:20.069 "name": "Existed_Raid", 00:10:20.069 "uuid": "35fa6ab1-2712-11ef-b084-113036b5c18d", 00:10:20.069 "strip_size_kb": 64, 
00:10:20.069 "state": "configuring", 00:10:20.069 "raid_level": "concat", 00:10:20.069 "superblock": true, 00:10:20.069 "num_base_bdevs": 3, 00:10:20.069 "num_base_bdevs_discovered": 2, 00:10:20.069 "num_base_bdevs_operational": 3, 00:10:20.069 "base_bdevs_list": [ 00:10:20.069 { 00:10:20.069 "name": "BaseBdev1", 00:10:20.069 "uuid": "3506e136-2712-11ef-b084-113036b5c18d", 00:10:20.069 "is_configured": true, 00:10:20.069 "data_offset": 2048, 00:10:20.069 "data_size": 63488 00:10:20.069 }, 00:10:20.069 { 00:10:20.069 "name": "BaseBdev2", 00:10:20.069 "uuid": "3665da8c-2712-11ef-b084-113036b5c18d", 00:10:20.069 "is_configured": true, 00:10:20.069 "data_offset": 2048, 00:10:20.069 "data_size": 63488 00:10:20.069 }, 00:10:20.069 { 00:10:20.069 "name": "BaseBdev3", 00:10:20.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.069 "is_configured": false, 00:10:20.069 "data_offset": 0, 00:10:20.069 "data_size": 0 00:10:20.069 } 00:10:20.069 ] 00:10:20.069 }' 00:10:20.069 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:20.069 10:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.328 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:20.328 [2024-06-10 10:14:25.919542] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.328 [2024-06-10 10:14:25.919598] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b6fea00 00:10:20.328 [2024-06-10 10:14:25.919603] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:20.328 [2024-06-10 10:14:25.919620] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b761ec0 00:10:20.328 [2024-06-10 10:14:25.919657] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b6fea00 00:10:20.328 [2024-06-10 10:14:25.919660] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b6fea00 00:10:20.328 [2024-06-10 10:14:25.919676] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.328 BaseBdev3 00:10:20.585 10:14:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:20.585 10:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:10:20.585 10:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:20.585 10:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:10:20.585 10:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:20.585 10:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:20.585 10:14:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:20.842 10:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:20.842 [ 00:10:20.842 { 00:10:20.842 "name": "BaseBdev3", 00:10:20.842 "aliases": [ 00:10:20.842 "371b226c-2712-11ef-b084-113036b5c18d" 00:10:20.842 ], 
00:10:20.842 "product_name": "Malloc disk", 00:10:20.842 "block_size": 512, 00:10:20.842 "num_blocks": 65536, 00:10:20.842 "uuid": "371b226c-2712-11ef-b084-113036b5c18d", 00:10:20.842 "assigned_rate_limits": { 00:10:20.842 "rw_ios_per_sec": 0, 00:10:20.842 "rw_mbytes_per_sec": 0, 00:10:20.842 "r_mbytes_per_sec": 0, 00:10:20.842 "w_mbytes_per_sec": 0 00:10:20.842 }, 00:10:20.842 "claimed": true, 00:10:20.843 "claim_type": "exclusive_write", 00:10:20.843 "zoned": false, 00:10:20.843 "supported_io_types": { 00:10:20.843 "read": true, 00:10:20.843 "write": true, 00:10:20.843 "unmap": true, 00:10:20.843 "write_zeroes": true, 00:10:20.843 "flush": true, 00:10:20.843 "reset": true, 00:10:20.843 "compare": false, 00:10:20.843 "compare_and_write": false, 00:10:20.843 "abort": true, 00:10:20.843 "nvme_admin": false, 00:10:20.843 "nvme_io": false 00:10:20.843 }, 00:10:20.843 "memory_domains": [ 00:10:20.843 { 00:10:20.843 "dma_device_id": "system", 00:10:20.843 "dma_device_type": 1 00:10:20.843 }, 00:10:20.843 { 00:10:20.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.843 "dma_device_type": 2 00:10:20.843 } 00:10:20.843 ], 00:10:20.843 "driver_specific": {} 00:10:20.843 } 00:10:20.843 ] 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.843 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.099 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:21.099 "name": "Existed_Raid", 00:10:21.099 "uuid": "35fa6ab1-2712-11ef-b084-113036b5c18d", 00:10:21.099 "strip_size_kb": 64, 00:10:21.099 "state": "online", 00:10:21.099 "raid_level": "concat", 00:10:21.099 "superblock": true, 00:10:21.099 "num_base_bdevs": 3, 00:10:21.099 "num_base_bdevs_discovered": 3, 00:10:21.099 "num_base_bdevs_operational": 3, 00:10:21.099 "base_bdevs_list": [ 00:10:21.099 { 
00:10:21.099 "name": "BaseBdev1", 00:10:21.099 "uuid": "3506e136-2712-11ef-b084-113036b5c18d", 00:10:21.099 "is_configured": true, 00:10:21.099 "data_offset": 2048, 00:10:21.099 "data_size": 63488 00:10:21.099 }, 00:10:21.099 { 00:10:21.099 "name": "BaseBdev2", 00:10:21.099 "uuid": "3665da8c-2712-11ef-b084-113036b5c18d", 00:10:21.099 "is_configured": true, 00:10:21.099 "data_offset": 2048, 00:10:21.099 "data_size": 63488 00:10:21.099 }, 00:10:21.099 { 00:10:21.099 "name": "BaseBdev3", 00:10:21.099 "uuid": "371b226c-2712-11ef-b084-113036b5c18d", 00:10:21.099 "is_configured": true, 00:10:21.099 "data_offset": 2048, 00:10:21.099 "data_size": 63488 00:10:21.099 } 00:10:21.099 ] 00:10:21.099 }' 00:10:21.099 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:21.099 10:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.662 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.662 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:21.662 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:21.662 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:21.662 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:21.662 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:21.662 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:21.662 10:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:21.662 [2024-06-10 10:14:27.203551] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.662 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:21.662 "name": "Existed_Raid", 00:10:21.662 "aliases": [ 00:10:21.662 "35fa6ab1-2712-11ef-b084-113036b5c18d" 00:10:21.662 ], 00:10:21.662 "product_name": "Raid Volume", 00:10:21.662 "block_size": 512, 00:10:21.662 "num_blocks": 190464, 00:10:21.662 "uuid": "35fa6ab1-2712-11ef-b084-113036b5c18d", 00:10:21.662 "assigned_rate_limits": { 00:10:21.662 "rw_ios_per_sec": 0, 00:10:21.662 "rw_mbytes_per_sec": 0, 00:10:21.662 "r_mbytes_per_sec": 0, 00:10:21.662 "w_mbytes_per_sec": 0 00:10:21.662 }, 00:10:21.662 "claimed": false, 00:10:21.662 "zoned": false, 00:10:21.662 "supported_io_types": { 00:10:21.662 "read": true, 00:10:21.662 "write": true, 00:10:21.662 "unmap": true, 00:10:21.662 "write_zeroes": true, 00:10:21.662 "flush": true, 00:10:21.662 "reset": true, 00:10:21.662 "compare": false, 00:10:21.662 "compare_and_write": false, 00:10:21.662 "abort": false, 00:10:21.662 "nvme_admin": false, 00:10:21.662 "nvme_io": false 00:10:21.662 }, 00:10:21.662 "memory_domains": [ 00:10:21.662 { 00:10:21.662 "dma_device_id": "system", 00:10:21.662 "dma_device_type": 1 00:10:21.662 }, 00:10:21.662 { 00:10:21.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.662 "dma_device_type": 2 00:10:21.662 }, 00:10:21.662 { 00:10:21.662 "dma_device_id": "system", 00:10:21.662 "dma_device_type": 1 00:10:21.662 }, 00:10:21.662 { 00:10:21.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.662 "dma_device_type": 2 00:10:21.662 
}, 00:10:21.662 { 00:10:21.662 "dma_device_id": "system", 00:10:21.662 "dma_device_type": 1 00:10:21.662 }, 00:10:21.662 { 00:10:21.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.662 "dma_device_type": 2 00:10:21.662 } 00:10:21.662 ], 00:10:21.662 "driver_specific": { 00:10:21.662 "raid": { 00:10:21.662 "uuid": "35fa6ab1-2712-11ef-b084-113036b5c18d", 00:10:21.662 "strip_size_kb": 64, 00:10:21.662 "state": "online", 00:10:21.662 "raid_level": "concat", 00:10:21.662 "superblock": true, 00:10:21.662 "num_base_bdevs": 3, 00:10:21.662 "num_base_bdevs_discovered": 3, 00:10:21.662 "num_base_bdevs_operational": 3, 00:10:21.662 "base_bdevs_list": [ 00:10:21.662 { 00:10:21.662 "name": "BaseBdev1", 00:10:21.662 "uuid": "3506e136-2712-11ef-b084-113036b5c18d", 00:10:21.662 "is_configured": true, 00:10:21.662 "data_offset": 2048, 00:10:21.662 "data_size": 63488 00:10:21.662 }, 00:10:21.662 { 00:10:21.662 "name": "BaseBdev2", 00:10:21.662 "uuid": "3665da8c-2712-11ef-b084-113036b5c18d", 00:10:21.662 "is_configured": true, 00:10:21.662 "data_offset": 2048, 00:10:21.662 "data_size": 63488 00:10:21.662 }, 00:10:21.662 { 00:10:21.662 "name": "BaseBdev3", 00:10:21.662 "uuid": "371b226c-2712-11ef-b084-113036b5c18d", 00:10:21.662 "is_configured": true, 00:10:21.662 "data_offset": 2048, 00:10:21.662 "data_size": 63488 00:10:21.662 } 00:10:21.662 ] 00:10:21.662 } 00:10:21.662 } 00:10:21.662 }' 00:10:21.662 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.662 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:21.662 BaseBdev2 00:10:21.662 BaseBdev3' 00:10:21.662 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:21.662 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:21.662 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:21.920 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:21.920 "name": "BaseBdev1", 00:10:21.920 "aliases": [ 00:10:21.920 "3506e136-2712-11ef-b084-113036b5c18d" 00:10:21.920 ], 00:10:21.920 "product_name": "Malloc disk", 00:10:21.920 "block_size": 512, 00:10:21.920 "num_blocks": 65536, 00:10:21.920 "uuid": "3506e136-2712-11ef-b084-113036b5c18d", 00:10:21.920 "assigned_rate_limits": { 00:10:21.920 "rw_ios_per_sec": 0, 00:10:21.920 "rw_mbytes_per_sec": 0, 00:10:21.920 "r_mbytes_per_sec": 0, 00:10:21.920 "w_mbytes_per_sec": 0 00:10:21.920 }, 00:10:21.920 "claimed": true, 00:10:21.920 "claim_type": "exclusive_write", 00:10:21.920 "zoned": false, 00:10:21.920 "supported_io_types": { 00:10:21.920 "read": true, 00:10:21.920 "write": true, 00:10:21.920 "unmap": true, 00:10:21.920 "write_zeroes": true, 00:10:21.920 "flush": true, 00:10:21.920 "reset": true, 00:10:21.920 "compare": false, 00:10:21.920 "compare_and_write": false, 00:10:21.920 "abort": true, 00:10:21.920 "nvme_admin": false, 00:10:21.920 "nvme_io": false 00:10:21.920 }, 00:10:21.920 "memory_domains": [ 00:10:21.920 { 00:10:21.920 "dma_device_id": "system", 00:10:21.920 "dma_device_type": 1 00:10:21.920 }, 00:10:21.920 { 00:10:21.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.920 "dma_device_type": 2 00:10:21.920 } 00:10:21.920 ], 00:10:21.920 
"driver_specific": {} 00:10:21.920 }' 00:10:21.920 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:22.178 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:22.436 "name": "BaseBdev2", 00:10:22.436 "aliases": [ 00:10:22.436 "3665da8c-2712-11ef-b084-113036b5c18d" 00:10:22.436 ], 00:10:22.436 "product_name": "Malloc disk", 00:10:22.436 "block_size": 512, 00:10:22.436 "num_blocks": 65536, 00:10:22.436 "uuid": "3665da8c-2712-11ef-b084-113036b5c18d", 00:10:22.436 "assigned_rate_limits": { 00:10:22.436 "rw_ios_per_sec": 0, 00:10:22.436 "rw_mbytes_per_sec": 0, 00:10:22.436 "r_mbytes_per_sec": 0, 00:10:22.436 "w_mbytes_per_sec": 0 00:10:22.436 }, 00:10:22.436 "claimed": true, 00:10:22.436 "claim_type": "exclusive_write", 00:10:22.436 "zoned": false, 00:10:22.436 "supported_io_types": { 00:10:22.436 "read": true, 00:10:22.436 "write": true, 00:10:22.436 "unmap": true, 00:10:22.436 "write_zeroes": true, 00:10:22.436 "flush": true, 00:10:22.436 "reset": true, 00:10:22.436 "compare": false, 00:10:22.436 "compare_and_write": false, 00:10:22.436 "abort": true, 00:10:22.436 "nvme_admin": false, 00:10:22.436 "nvme_io": false 00:10:22.436 }, 00:10:22.436 "memory_domains": [ 00:10:22.436 { 00:10:22.436 "dma_device_id": "system", 00:10:22.436 "dma_device_type": 1 00:10:22.436 }, 00:10:22.436 { 00:10:22.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.436 "dma_device_type": 2 00:10:22.436 } 00:10:22.436 ], 00:10:22.436 "driver_specific": {} 00:10:22.436 }' 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:22.436 10:14:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:22.710 "name": "BaseBdev3", 00:10:22.710 "aliases": [ 00:10:22.710 "371b226c-2712-11ef-b084-113036b5c18d" 00:10:22.710 ], 00:10:22.710 "product_name": "Malloc disk", 00:10:22.710 "block_size": 512, 00:10:22.710 "num_blocks": 65536, 00:10:22.710 "uuid": "371b226c-2712-11ef-b084-113036b5c18d", 00:10:22.710 "assigned_rate_limits": { 00:10:22.710 "rw_ios_per_sec": 0, 00:10:22.710 "rw_mbytes_per_sec": 0, 00:10:22.710 "r_mbytes_per_sec": 0, 00:10:22.710 "w_mbytes_per_sec": 0 00:10:22.710 }, 00:10:22.710 "claimed": true, 00:10:22.710 "claim_type": "exclusive_write", 00:10:22.710 "zoned": false, 00:10:22.710 "supported_io_types": { 00:10:22.710 "read": true, 00:10:22.710 "write": true, 00:10:22.710 "unmap": true, 00:10:22.710 "write_zeroes": true, 00:10:22.710 "flush": true, 00:10:22.710 "reset": true, 00:10:22.710 "compare": false, 00:10:22.710 "compare_and_write": false, 00:10:22.710 "abort": true, 00:10:22.710 "nvme_admin": false, 00:10:22.710 "nvme_io": false 00:10:22.710 }, 00:10:22.710 "memory_domains": [ 00:10:22.710 { 00:10:22.710 "dma_device_id": "system", 00:10:22.710 "dma_device_type": 1 00:10:22.710 }, 00:10:22.710 { 00:10:22.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.710 "dma_device_type": 2 00:10:22.710 } 00:10:22.710 ], 00:10:22.710 "driver_specific": {} 00:10:22.710 }' 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:22.710 
10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:22.710 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:22.969 [2024-06-10 10:14:28.555543] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:22.969 [2024-06-10 10:14:28.555573] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.969 [2024-06-10 10:14:28.555586] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:23.229 "name": "Existed_Raid", 00:10:23.229 "uuid": "35fa6ab1-2712-11ef-b084-113036b5c18d", 00:10:23.229 "strip_size_kb": 64, 00:10:23.229 "state": "offline", 00:10:23.229 "raid_level": "concat", 00:10:23.229 "superblock": true, 00:10:23.229 "num_base_bdevs": 3, 00:10:23.229 "num_base_bdevs_discovered": 2, 00:10:23.229 "num_base_bdevs_operational": 2, 00:10:23.229 "base_bdevs_list": [ 00:10:23.229 { 00:10:23.229 "name": null, 
00:10:23.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.229 "is_configured": false, 00:10:23.229 "data_offset": 2048, 00:10:23.229 "data_size": 63488 00:10:23.229 }, 00:10:23.229 { 00:10:23.229 "name": "BaseBdev2", 00:10:23.229 "uuid": "3665da8c-2712-11ef-b084-113036b5c18d", 00:10:23.229 "is_configured": true, 00:10:23.229 "data_offset": 2048, 00:10:23.229 "data_size": 63488 00:10:23.229 }, 00:10:23.229 { 00:10:23.229 "name": "BaseBdev3", 00:10:23.229 "uuid": "371b226c-2712-11ef-b084-113036b5c18d", 00:10:23.229 "is_configured": true, 00:10:23.229 "data_offset": 2048, 00:10:23.229 "data_size": 63488 00:10:23.229 } 00:10:23.229 ] 00:10:23.229 }' 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:23.229 10:14:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.798 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:23.798 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:23.798 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.798 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:23.798 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:23.798 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:23.798 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:24.364 [2024-06-10 10:14:29.676425] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:24.364 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:24.364 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:24.364 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.364 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:24.364 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:24.364 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.364 10:14:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:24.622 [2024-06-10 10:14:30.177204] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:24.622 [2024-06-10 10:14:30.177234] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b6fea00 name Existed_Raid, state offline 00:10:24.622 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:24.622 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:24.622 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:10:24.622 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:24.881 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:24.881 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:24.881 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:24.881 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:24.881 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:24.881 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.141 BaseBdev2 00:10:25.141 10:14:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:25.141 10:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:10:25.141 10:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:25.141 10:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:10:25.141 10:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:25.141 10:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:25.141 10:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:25.399 10:14:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.659 [ 00:10:25.659 { 00:10:25.659 "name": "BaseBdev2", 00:10:25.659 "aliases": [ 00:10:25.659 "39ebbfb6-2712-11ef-b084-113036b5c18d" 00:10:25.659 ], 00:10:25.659 "product_name": "Malloc disk", 00:10:25.659 "block_size": 512, 00:10:25.659 "num_blocks": 65536, 00:10:25.659 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:25.659 "assigned_rate_limits": { 00:10:25.659 "rw_ios_per_sec": 0, 00:10:25.659 "rw_mbytes_per_sec": 0, 00:10:25.659 "r_mbytes_per_sec": 0, 00:10:25.659 "w_mbytes_per_sec": 0 00:10:25.659 }, 00:10:25.659 "claimed": false, 00:10:25.659 "zoned": false, 00:10:25.659 "supported_io_types": { 00:10:25.659 "read": true, 00:10:25.659 "write": true, 00:10:25.659 "unmap": true, 00:10:25.659 "write_zeroes": true, 00:10:25.659 "flush": true, 00:10:25.659 "reset": true, 00:10:25.659 "compare": false, 00:10:25.659 "compare_and_write": false, 00:10:25.659 "abort": true, 00:10:25.659 "nvme_admin": false, 00:10:25.659 "nvme_io": false 00:10:25.659 }, 00:10:25.659 "memory_domains": [ 00:10:25.659 { 00:10:25.659 "dma_device_id": "system", 00:10:25.659 "dma_device_type": 1 00:10:25.659 }, 00:10:25.659 { 00:10:25.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.659 "dma_device_type": 2 00:10:25.659 } 00:10:25.659 ], 00:10:25.659 "driver_specific": {} 00:10:25.659 } 00:10:25.659 ] 00:10:25.659 10:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:10:25.659 10:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:25.659 10:14:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:25.659 10:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:25.933 BaseBdev3 00:10:25.933 10:14:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:25.933 10:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:10:25.933 10:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:25.933 10:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:10:25.933 10:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:25.933 10:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:25.933 10:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:26.501 10:14:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.501 [ 00:10:26.501 { 00:10:26.501 "name": "BaseBdev3", 00:10:26.501 "aliases": [ 00:10:26.501 "3a68e134-2712-11ef-b084-113036b5c18d" 00:10:26.501 ], 00:10:26.501 "product_name": "Malloc disk", 00:10:26.501 "block_size": 512, 00:10:26.501 "num_blocks": 65536, 00:10:26.501 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:26.501 "assigned_rate_limits": { 00:10:26.501 "rw_ios_per_sec": 0, 00:10:26.501 "rw_mbytes_per_sec": 0, 00:10:26.501 "r_mbytes_per_sec": 0, 00:10:26.501 "w_mbytes_per_sec": 0 00:10:26.501 }, 00:10:26.501 "claimed": false, 00:10:26.501 "zoned": false, 00:10:26.501 "supported_io_types": { 00:10:26.501 "read": true, 00:10:26.501 "write": true, 00:10:26.501 "unmap": true, 00:10:26.501 "write_zeroes": true, 00:10:26.501 "flush": true, 00:10:26.501 "reset": true, 00:10:26.501 "compare": false, 00:10:26.501 "compare_and_write": false, 00:10:26.501 "abort": true, 00:10:26.501 "nvme_admin": false, 00:10:26.501 "nvme_io": false 00:10:26.501 }, 00:10:26.501 "memory_domains": [ 00:10:26.501 { 00:10:26.501 "dma_device_id": "system", 00:10:26.501 "dma_device_type": 1 00:10:26.501 }, 00:10:26.501 { 00:10:26.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.501 "dma_device_type": 2 00:10:26.501 } 00:10:26.501 ], 00:10:26.501 "driver_specific": {} 00:10:26.501 } 00:10:26.501 ] 00:10:26.501 10:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:10:26.501 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:26.501 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:26.501 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:26.759 [2024-06-10 10:14:32.282140] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.759 [2024-06-10 10:14:32.282186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.759 [2024-06-10 
10:14:32.282194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.759 [2024-06-10 10:14:32.282647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.759 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.017 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.017 "name": "Existed_Raid", 00:10:27.017 "uuid": "3ae600e6-2712-11ef-b084-113036b5c18d", 00:10:27.017 "strip_size_kb": 64, 00:10:27.017 "state": "configuring", 00:10:27.017 "raid_level": "concat", 00:10:27.017 "superblock": true, 00:10:27.017 "num_base_bdevs": 3, 00:10:27.017 "num_base_bdevs_discovered": 2, 00:10:27.017 "num_base_bdevs_operational": 3, 00:10:27.017 "base_bdevs_list": [ 00:10:27.017 { 00:10:27.017 "name": "BaseBdev1", 00:10:27.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.017 "is_configured": false, 00:10:27.017 "data_offset": 0, 00:10:27.017 "data_size": 0 00:10:27.017 }, 00:10:27.017 { 00:10:27.017 "name": "BaseBdev2", 00:10:27.017 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:27.017 "is_configured": true, 00:10:27.017 "data_offset": 2048, 00:10:27.017 "data_size": 63488 00:10:27.017 }, 00:10:27.017 { 00:10:27.017 "name": "BaseBdev3", 00:10:27.017 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:27.017 "is_configured": true, 00:10:27.017 "data_offset": 2048, 00:10:27.017 "data_size": 63488 00:10:27.017 } 00:10:27.017 ] 00:10:27.017 }' 00:10:27.017 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.017 10:14:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.585 10:14:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:27.585 [2024-06-10 10:14:33.138172] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.585 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.843 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.843 "name": "Existed_Raid", 00:10:27.843 "uuid": "3ae600e6-2712-11ef-b084-113036b5c18d", 00:10:27.843 "strip_size_kb": 64, 00:10:27.843 "state": "configuring", 00:10:27.843 "raid_level": "concat", 00:10:27.843 "superblock": true, 00:10:27.843 "num_base_bdevs": 3, 00:10:27.843 "num_base_bdevs_discovered": 1, 00:10:27.843 "num_base_bdevs_operational": 3, 00:10:27.843 "base_bdevs_list": [ 00:10:27.843 { 00:10:27.843 "name": "BaseBdev1", 00:10:27.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.843 "is_configured": false, 00:10:27.843 "data_offset": 0, 00:10:27.843 "data_size": 0 00:10:27.843 }, 00:10:27.843 { 00:10:27.843 "name": null, 00:10:27.843 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:27.843 "is_configured": false, 00:10:27.843 "data_offset": 2048, 00:10:27.843 "data_size": 63488 00:10:27.843 }, 00:10:27.843 { 00:10:27.843 "name": "BaseBdev3", 00:10:27.843 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:27.843 "is_configured": true, 00:10:27.843 "data_offset": 2048, 00:10:27.843 "data_size": 63488 00:10:27.843 } 00:10:27.843 ] 00:10:27.843 }' 00:10:27.843 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.843 10:14:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.102 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.102 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:28.360 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:28.360 10:14:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:28.618 [2024-06-10 10:14:34.138332] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:28.618 BaseBdev1 00:10:28.618 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:28.618 10:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:10:28.618 10:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:28.618 10:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:10:28.618 10:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:28.618 10:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:28.618 10:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:29.184 [ 00:10:29.184 { 00:10:29.184 "name": "BaseBdev1", 00:10:29.184 "aliases": [ 00:10:29.184 "3c0138a7-2712-11ef-b084-113036b5c18d" 00:10:29.184 ], 00:10:29.184 "product_name": "Malloc disk", 00:10:29.184 "block_size": 512, 00:10:29.184 "num_blocks": 65536, 00:10:29.184 "uuid": "3c0138a7-2712-11ef-b084-113036b5c18d", 00:10:29.184 "assigned_rate_limits": { 00:10:29.184 "rw_ios_per_sec": 0, 00:10:29.184 "rw_mbytes_per_sec": 0, 00:10:29.184 "r_mbytes_per_sec": 0, 00:10:29.184 "w_mbytes_per_sec": 0 00:10:29.184 }, 00:10:29.184 "claimed": true, 00:10:29.184 "claim_type": "exclusive_write", 00:10:29.184 "zoned": false, 00:10:29.184 "supported_io_types": { 00:10:29.184 "read": true, 00:10:29.184 "write": true, 00:10:29.184 "unmap": true, 00:10:29.184 "write_zeroes": true, 00:10:29.184 "flush": true, 00:10:29.184 "reset": true, 00:10:29.184 "compare": false, 00:10:29.184 "compare_and_write": false, 00:10:29.184 "abort": true, 00:10:29.184 "nvme_admin": false, 00:10:29.184 "nvme_io": false 00:10:29.184 }, 00:10:29.184 "memory_domains": [ 00:10:29.184 { 00:10:29.184 "dma_device_id": "system", 00:10:29.184 "dma_device_type": 1 00:10:29.184 }, 00:10:29.184 { 00:10:29.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.184 "dma_device_type": 2 00:10:29.184 } 00:10:29.184 ], 00:10:29.184 "driver_specific": {} 00:10:29.184 } 00:10:29.184 ] 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:29.184 10:14:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.184 10:14:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.750 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:29.750 "name": "Existed_Raid", 00:10:29.750 "uuid": "3ae600e6-2712-11ef-b084-113036b5c18d", 00:10:29.750 "strip_size_kb": 64, 00:10:29.750 "state": "configuring", 00:10:29.750 "raid_level": "concat", 00:10:29.750 "superblock": true, 00:10:29.750 "num_base_bdevs": 3, 00:10:29.750 "num_base_bdevs_discovered": 2, 00:10:29.750 "num_base_bdevs_operational": 3, 00:10:29.750 "base_bdevs_list": [ 00:10:29.750 { 00:10:29.750 "name": "BaseBdev1", 00:10:29.750 "uuid": "3c0138a7-2712-11ef-b084-113036b5c18d", 00:10:29.750 "is_configured": true, 00:10:29.750 "data_offset": 2048, 00:10:29.750 "data_size": 63488 00:10:29.750 }, 00:10:29.750 { 00:10:29.750 "name": null, 00:10:29.750 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:29.750 "is_configured": false, 00:10:29.750 "data_offset": 2048, 00:10:29.750 "data_size": 63488 00:10:29.750 }, 00:10:29.750 { 00:10:29.750 "name": "BaseBdev3", 00:10:29.750 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:29.750 "is_configured": true, 00:10:29.750 "data_offset": 2048, 00:10:29.750 "data_size": 63488 00:10:29.750 } 00:10:29.750 ] 00:10:29.750 }' 00:10:29.750 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:29.750 10:14:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.008 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.008 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.266 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:30.266 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:30.524 [2024-06-10 10:14:35.886284] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.524 10:14:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.782 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:30.782 "name": "Existed_Raid", 00:10:30.782 "uuid": "3ae600e6-2712-11ef-b084-113036b5c18d", 00:10:30.782 "strip_size_kb": 64, 00:10:30.782 "state": "configuring", 00:10:30.782 "raid_level": "concat", 00:10:30.782 "superblock": true, 00:10:30.782 "num_base_bdevs": 3, 00:10:30.782 "num_base_bdevs_discovered": 1, 00:10:30.782 "num_base_bdevs_operational": 3, 00:10:30.782 "base_bdevs_list": [ 00:10:30.782 { 00:10:30.782 "name": "BaseBdev1", 00:10:30.782 "uuid": "3c0138a7-2712-11ef-b084-113036b5c18d", 00:10:30.782 "is_configured": true, 00:10:30.782 "data_offset": 2048, 00:10:30.782 "data_size": 63488 00:10:30.782 }, 00:10:30.782 { 00:10:30.782 "name": null, 00:10:30.782 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:30.782 "is_configured": false, 00:10:30.782 "data_offset": 2048, 00:10:30.782 "data_size": 63488 00:10:30.782 }, 00:10:30.782 { 00:10:30.782 "name": null, 00:10:30.782 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:30.782 "is_configured": false, 00:10:30.782 "data_offset": 2048, 00:10:30.782 "data_size": 63488 00:10:30.782 } 00:10:30.782 ] 00:10:30.782 }' 00:10:30.782 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:30.782 10:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.040 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.040 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.297 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:31.297 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:31.555 [2024-06-10 10:14:36.950370] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.555 10:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.812 10:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:31.812 "name": "Existed_Raid", 00:10:31.812 "uuid": "3ae600e6-2712-11ef-b084-113036b5c18d", 00:10:31.812 "strip_size_kb": 64, 00:10:31.812 "state": "configuring", 00:10:31.812 "raid_level": "concat", 00:10:31.812 "superblock": true, 00:10:31.812 "num_base_bdevs": 3, 00:10:31.812 "num_base_bdevs_discovered": 2, 00:10:31.812 "num_base_bdevs_operational": 3, 00:10:31.812 "base_bdevs_list": [ 00:10:31.812 { 00:10:31.812 "name": "BaseBdev1", 00:10:31.812 "uuid": "3c0138a7-2712-11ef-b084-113036b5c18d", 00:10:31.812 "is_configured": true, 00:10:31.812 "data_offset": 2048, 00:10:31.812 "data_size": 63488 00:10:31.812 }, 00:10:31.812 { 00:10:31.813 "name": null, 00:10:31.813 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:31.813 "is_configured": false, 00:10:31.813 "data_offset": 2048, 00:10:31.813 "data_size": 63488 00:10:31.813 }, 00:10:31.813 { 00:10:31.813 "name": "BaseBdev3", 00:10:31.813 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:31.813 "is_configured": true, 00:10:31.813 "data_offset": 2048, 00:10:31.813 "data_size": 63488 00:10:31.813 } 00:10:31.813 ] 00:10:31.813 }' 00:10:31.813 10:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:31.813 10:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.070 10:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.070 10:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.327 10:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:32.327 10:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:32.584 [2024-06-10 10:14:38.182432] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:32.842 10:14:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.842 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.100 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:33.100 "name": "Existed_Raid", 00:10:33.100 "uuid": "3ae600e6-2712-11ef-b084-113036b5c18d", 00:10:33.100 "strip_size_kb": 64, 00:10:33.100 "state": "configuring", 00:10:33.100 "raid_level": "concat", 00:10:33.100 "superblock": true, 00:10:33.100 "num_base_bdevs": 3, 00:10:33.100 "num_base_bdevs_discovered": 1, 00:10:33.100 "num_base_bdevs_operational": 3, 00:10:33.100 "base_bdevs_list": [ 00:10:33.100 { 00:10:33.100 "name": null, 00:10:33.100 "uuid": "3c0138a7-2712-11ef-b084-113036b5c18d", 00:10:33.100 "is_configured": false, 00:10:33.100 "data_offset": 2048, 00:10:33.100 "data_size": 63488 00:10:33.100 }, 00:10:33.100 { 00:10:33.100 "name": null, 00:10:33.100 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:33.100 "is_configured": false, 00:10:33.100 "data_offset": 2048, 00:10:33.100 "data_size": 63488 00:10:33.100 }, 00:10:33.100 { 00:10:33.100 "name": "BaseBdev3", 00:10:33.100 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:33.100 "is_configured": true, 00:10:33.100 "data_offset": 2048, 00:10:33.100 "data_size": 63488 00:10:33.100 } 00:10:33.100 ] 00:10:33.100 }' 00:10:33.100 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:33.100 10:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.358 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.358 10:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.668 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:33.668 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:33.928 [2024-06-10 10:14:39.375200] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.928 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.188 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:34.188 "name": "Existed_Raid", 00:10:34.188 "uuid": "3ae600e6-2712-11ef-b084-113036b5c18d", 00:10:34.188 "strip_size_kb": 64, 00:10:34.188 "state": "configuring", 00:10:34.188 "raid_level": "concat", 00:10:34.188 "superblock": true, 00:10:34.188 "num_base_bdevs": 3, 00:10:34.188 "num_base_bdevs_discovered": 2, 00:10:34.188 "num_base_bdevs_operational": 3, 00:10:34.188 "base_bdevs_list": [ 00:10:34.188 { 00:10:34.188 "name": null, 00:10:34.188 "uuid": "3c0138a7-2712-11ef-b084-113036b5c18d", 00:10:34.188 "is_configured": false, 00:10:34.188 "data_offset": 2048, 00:10:34.188 "data_size": 63488 00:10:34.188 }, 00:10:34.188 { 00:10:34.188 "name": "BaseBdev2", 00:10:34.188 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:34.188 "is_configured": true, 00:10:34.188 "data_offset": 2048, 00:10:34.188 "data_size": 63488 00:10:34.188 }, 00:10:34.188 { 00:10:34.188 "name": "BaseBdev3", 00:10:34.188 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:34.188 "is_configured": true, 00:10:34.188 "data_offset": 2048, 00:10:34.188 "data_size": 63488 00:10:34.188 } 00:10:34.188 ] 00:10:34.188 }' 00:10:34.188 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:34.188 10:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.447 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.447 10:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:34.705 10:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:34.705 10:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.705 10:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:34.705 10:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 3c0138a7-2712-11ef-b084-113036b5c18d 00:10:34.963 [2024-06-10 10:14:40.459329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:34.963 [2024-06-10 10:14:40.459374] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b6fea00 00:10:34.963 [2024-06-10 10:14:40.459394] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:34.963 [2024-06-10 10:14:40.459412] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b761e20 00:10:34.963 [2024-06-10 10:14:40.459444] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b6fea00 00:10:34.963 [2024-06-10 10:14:40.459447] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b6fea00 00:10:34.963 [2024-06-10 10:14:40.459463] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.963 NewBaseBdev 00:10:34.963 10:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:34.963 10:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:10:34.963 10:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:10:34.963 10:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:10:34.963 10:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:10:34.963 10:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:10:34.963 10:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:35.222 10:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:35.480 [ 00:10:35.480 { 00:10:35.480 "name": "NewBaseBdev", 00:10:35.480 "aliases": [ 00:10:35.480 "3c0138a7-2712-11ef-b084-113036b5c18d" 00:10:35.480 ], 00:10:35.480 "product_name": "Malloc disk", 00:10:35.480 "block_size": 512, 00:10:35.480 "num_blocks": 65536, 00:10:35.480 "uuid": "3c0138a7-2712-11ef-b084-113036b5c18d", 00:10:35.480 "assigned_rate_limits": { 00:10:35.480 "rw_ios_per_sec": 0, 00:10:35.480 "rw_mbytes_per_sec": 0, 00:10:35.480 "r_mbytes_per_sec": 0, 00:10:35.480 "w_mbytes_per_sec": 0 00:10:35.480 }, 00:10:35.480 "claimed": true, 00:10:35.480 "claim_type": "exclusive_write", 00:10:35.480 "zoned": false, 00:10:35.480 "supported_io_types": { 00:10:35.480 "read": true, 00:10:35.480 "write": true, 00:10:35.480 "unmap": true, 00:10:35.480 "write_zeroes": true, 00:10:35.480 "flush": true, 00:10:35.480 "reset": true, 00:10:35.480 "compare": false, 00:10:35.480 "compare_and_write": false, 00:10:35.480 "abort": true, 00:10:35.480 "nvme_admin": false, 00:10:35.480 "nvme_io": false 00:10:35.480 }, 00:10:35.480 "memory_domains": [ 00:10:35.480 { 00:10:35.480 "dma_device_id": "system", 00:10:35.480 "dma_device_type": 1 00:10:35.480 }, 00:10:35.480 { 00:10:35.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.480 "dma_device_type": 2 00:10:35.480 } 00:10:35.480 ], 00:10:35.480 "driver_specific": {} 00:10:35.480 } 00:10:35.480 ] 00:10:35.480 10:14:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.480 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.738 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:35.738 "name": "Existed_Raid", 00:10:35.738 "uuid": "3ae600e6-2712-11ef-b084-113036b5c18d", 00:10:35.738 "strip_size_kb": 64, 00:10:35.738 "state": "online", 00:10:35.738 "raid_level": "concat", 00:10:35.738 "superblock": true, 00:10:35.739 "num_base_bdevs": 3, 00:10:35.739 "num_base_bdevs_discovered": 3, 00:10:35.739 "num_base_bdevs_operational": 3, 00:10:35.739 "base_bdevs_list": [ 00:10:35.739 { 00:10:35.739 "name": "NewBaseBdev", 00:10:35.739 "uuid": "3c0138a7-2712-11ef-b084-113036b5c18d", 00:10:35.739 "is_configured": true, 00:10:35.739 "data_offset": 2048, 00:10:35.739 "data_size": 63488 00:10:35.739 }, 00:10:35.739 { 00:10:35.739 "name": "BaseBdev2", 00:10:35.739 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:35.739 "is_configured": true, 00:10:35.739 "data_offset": 2048, 00:10:35.739 "data_size": 63488 00:10:35.739 }, 00:10:35.739 { 00:10:35.739 "name": "BaseBdev3", 00:10:35.739 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:35.739 "is_configured": true, 00:10:35.739 "data_offset": 2048, 00:10:35.739 "data_size": 63488 00:10:35.739 } 00:10:35.739 ] 00:10:35.739 }' 00:10:35.739 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:35.739 10:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.305 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.305 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:36.305 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:36.305 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:36.305 10:14:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:36.305 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:36.305 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:36.305 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:36.564 [2024-06-10 10:14:41.911363] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.564 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:36.564 "name": "Existed_Raid", 00:10:36.564 "aliases": [ 00:10:36.564 "3ae600e6-2712-11ef-b084-113036b5c18d" 00:10:36.564 ], 00:10:36.564 "product_name": "Raid Volume", 00:10:36.564 "block_size": 512, 00:10:36.564 "num_blocks": 190464, 00:10:36.564 "uuid": "3ae600e6-2712-11ef-b084-113036b5c18d", 00:10:36.564 "assigned_rate_limits": { 00:10:36.564 "rw_ios_per_sec": 0, 00:10:36.564 "rw_mbytes_per_sec": 0, 00:10:36.564 "r_mbytes_per_sec": 0, 00:10:36.564 "w_mbytes_per_sec": 0 00:10:36.564 }, 00:10:36.564 "claimed": false, 00:10:36.564 "zoned": false, 00:10:36.564 "supported_io_types": { 00:10:36.564 "read": true, 00:10:36.564 "write": true, 00:10:36.564 "unmap": true, 00:10:36.564 "write_zeroes": true, 00:10:36.564 "flush": true, 00:10:36.564 "reset": true, 00:10:36.564 "compare": false, 00:10:36.564 "compare_and_write": false, 00:10:36.564 "abort": false, 00:10:36.564 "nvme_admin": false, 00:10:36.564 "nvme_io": false 00:10:36.564 }, 00:10:36.564 "memory_domains": [ 00:10:36.564 { 00:10:36.564 "dma_device_id": "system", 00:10:36.564 "dma_device_type": 1 00:10:36.564 }, 00:10:36.564 { 00:10:36.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.564 "dma_device_type": 2 00:10:36.564 }, 00:10:36.564 { 00:10:36.564 "dma_device_id": "system", 00:10:36.564 "dma_device_type": 1 00:10:36.564 }, 00:10:36.564 { 00:10:36.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.564 "dma_device_type": 2 00:10:36.564 }, 00:10:36.564 { 00:10:36.564 "dma_device_id": "system", 00:10:36.564 "dma_device_type": 1 00:10:36.564 }, 00:10:36.564 { 00:10:36.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.564 "dma_device_type": 2 00:10:36.564 } 00:10:36.564 ], 00:10:36.564 "driver_specific": { 00:10:36.564 "raid": { 00:10:36.564 "uuid": "3ae600e6-2712-11ef-b084-113036b5c18d", 00:10:36.564 "strip_size_kb": 64, 00:10:36.564 "state": "online", 00:10:36.564 "raid_level": "concat", 00:10:36.564 "superblock": true, 00:10:36.564 "num_base_bdevs": 3, 00:10:36.564 "num_base_bdevs_discovered": 3, 00:10:36.564 "num_base_bdevs_operational": 3, 00:10:36.564 "base_bdevs_list": [ 00:10:36.564 { 00:10:36.564 "name": "NewBaseBdev", 00:10:36.564 "uuid": "3c0138a7-2712-11ef-b084-113036b5c18d", 00:10:36.564 "is_configured": true, 00:10:36.564 "data_offset": 2048, 00:10:36.564 "data_size": 63488 00:10:36.564 }, 00:10:36.564 { 00:10:36.564 "name": "BaseBdev2", 00:10:36.564 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:36.564 "is_configured": true, 00:10:36.564 "data_offset": 2048, 00:10:36.564 "data_size": 63488 00:10:36.564 }, 00:10:36.564 { 00:10:36.565 "name": "BaseBdev3", 00:10:36.565 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:36.565 "is_configured": true, 00:10:36.565 "data_offset": 2048, 00:10:36.565 "data_size": 63488 00:10:36.565 } 00:10:36.565 ] 00:10:36.565 } 00:10:36.565 } 00:10:36.565 }' 00:10:36.565 10:14:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.565 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:36.565 BaseBdev2 00:10:36.565 BaseBdev3' 00:10:36.565 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:36.565 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:36.565 10:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:36.823 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:36.823 "name": "NewBaseBdev", 00:10:36.823 "aliases": [ 00:10:36.823 "3c0138a7-2712-11ef-b084-113036b5c18d" 00:10:36.823 ], 00:10:36.823 "product_name": "Malloc disk", 00:10:36.823 "block_size": 512, 00:10:36.823 "num_blocks": 65536, 00:10:36.823 "uuid": "3c0138a7-2712-11ef-b084-113036b5c18d", 00:10:36.823 "assigned_rate_limits": { 00:10:36.823 "rw_ios_per_sec": 0, 00:10:36.823 "rw_mbytes_per_sec": 0, 00:10:36.823 "r_mbytes_per_sec": 0, 00:10:36.823 "w_mbytes_per_sec": 0 00:10:36.823 }, 00:10:36.823 "claimed": true, 00:10:36.823 "claim_type": "exclusive_write", 00:10:36.823 "zoned": false, 00:10:36.823 "supported_io_types": { 00:10:36.823 "read": true, 00:10:36.823 "write": true, 00:10:36.823 "unmap": true, 00:10:36.823 "write_zeroes": true, 00:10:36.823 "flush": true, 00:10:36.823 "reset": true, 00:10:36.823 "compare": false, 00:10:36.823 "compare_and_write": false, 00:10:36.823 "abort": true, 00:10:36.823 "nvme_admin": false, 00:10:36.823 "nvme_io": false 00:10:36.823 }, 00:10:36.823 "memory_domains": [ 00:10:36.823 { 00:10:36.823 "dma_device_id": "system", 00:10:36.823 "dma_device_type": 1 00:10:36.823 }, 00:10:36.823 { 00:10:36.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.823 "dma_device_type": 2 00:10:36.823 } 00:10:36.823 ], 00:10:36.823 "driver_specific": {} 00:10:36.823 }' 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:36.824 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:37.391 "name": "BaseBdev2", 00:10:37.391 "aliases": [ 00:10:37.391 "39ebbfb6-2712-11ef-b084-113036b5c18d" 00:10:37.391 ], 00:10:37.391 "product_name": "Malloc disk", 00:10:37.391 "block_size": 512, 00:10:37.391 "num_blocks": 65536, 00:10:37.391 "uuid": "39ebbfb6-2712-11ef-b084-113036b5c18d", 00:10:37.391 "assigned_rate_limits": { 00:10:37.391 "rw_ios_per_sec": 0, 00:10:37.391 "rw_mbytes_per_sec": 0, 00:10:37.391 "r_mbytes_per_sec": 0, 00:10:37.391 "w_mbytes_per_sec": 0 00:10:37.391 }, 00:10:37.391 "claimed": true, 00:10:37.391 "claim_type": "exclusive_write", 00:10:37.391 "zoned": false, 00:10:37.391 "supported_io_types": { 00:10:37.391 "read": true, 00:10:37.391 "write": true, 00:10:37.391 "unmap": true, 00:10:37.391 "write_zeroes": true, 00:10:37.391 "flush": true, 00:10:37.391 "reset": true, 00:10:37.391 "compare": false, 00:10:37.391 "compare_and_write": false, 00:10:37.391 "abort": true, 00:10:37.391 "nvme_admin": false, 00:10:37.391 "nvme_io": false 00:10:37.391 }, 00:10:37.391 "memory_domains": [ 00:10:37.391 { 00:10:37.391 "dma_device_id": "system", 00:10:37.391 "dma_device_type": 1 00:10:37.391 }, 00:10:37.391 { 00:10:37.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.391 "dma_device_type": 2 00:10:37.391 } 00:10:37.391 ], 00:10:37.391 "driver_specific": {} 00:10:37.391 }' 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:37.391 10:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:37.649 "name": "BaseBdev3", 00:10:37.649 
"aliases": [ 00:10:37.649 "3a68e134-2712-11ef-b084-113036b5c18d" 00:10:37.649 ], 00:10:37.649 "product_name": "Malloc disk", 00:10:37.649 "block_size": 512, 00:10:37.649 "num_blocks": 65536, 00:10:37.649 "uuid": "3a68e134-2712-11ef-b084-113036b5c18d", 00:10:37.649 "assigned_rate_limits": { 00:10:37.649 "rw_ios_per_sec": 0, 00:10:37.649 "rw_mbytes_per_sec": 0, 00:10:37.649 "r_mbytes_per_sec": 0, 00:10:37.649 "w_mbytes_per_sec": 0 00:10:37.649 }, 00:10:37.649 "claimed": true, 00:10:37.649 "claim_type": "exclusive_write", 00:10:37.649 "zoned": false, 00:10:37.649 "supported_io_types": { 00:10:37.649 "read": true, 00:10:37.649 "write": true, 00:10:37.649 "unmap": true, 00:10:37.649 "write_zeroes": true, 00:10:37.649 "flush": true, 00:10:37.649 "reset": true, 00:10:37.649 "compare": false, 00:10:37.649 "compare_and_write": false, 00:10:37.649 "abort": true, 00:10:37.649 "nvme_admin": false, 00:10:37.649 "nvme_io": false 00:10:37.649 }, 00:10:37.649 "memory_domains": [ 00:10:37.649 { 00:10:37.649 "dma_device_id": "system", 00:10:37.649 "dma_device_type": 1 00:10:37.649 }, 00:10:37.649 { 00:10:37.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.649 "dma_device_type": 2 00:10:37.649 } 00:10:37.649 ], 00:10:37.649 "driver_specific": {} 00:10:37.649 }' 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:37.649 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:37.909 [2024-06-10 10:14:43.443365] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.909 [2024-06-10 10:14:43.443393] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.909 [2024-06-10 10:14:43.443414] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.909 [2024-06-10 10:14:43.443428] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.909 [2024-06-10 10:14:43.443433] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b6fea00 name Existed_Raid, state offline 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 55519 00:10:37.909 10:14:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 55519 ']' 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 55519 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps -c -o command 55519 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # tail -1 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:10:37.909 killing process with pid 55519 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 55519' 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 55519 00:10:37.909 [2024-06-10 10:14:43.471840] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.909 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 55519 00:10:37.909 [2024-06-10 10:14:43.486132] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.167 10:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:10:38.168 00:10:38.168 real 0m23.700s 00:10:38.168 user 0m43.235s 00:10:38.168 sys 0m3.471s 00:10:38.168 ************************************ 00:10:38.168 END TEST raid_state_function_test_sb 00:10:38.168 ************************************ 00:10:38.168 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:38.168 10:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.168 10:14:43 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:38.168 10:14:43 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:10:38.168 10:14:43 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:38.168 10:14:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.168 ************************************ 00:10:38.168 START TEST raid_superblock_test 00:10:38.168 ************************************ 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test concat 3 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 
00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=56243 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 56243 /var/tmp/spdk-raid.sock 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 56243 ']' 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:38.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:38.168 10:14:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.168 [2024-06-10 10:14:43.713291] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:10:38.168 [2024-06-10 10:14:43.713538] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:38.735 EAL: TSC is not safe to use in SMP mode 00:10:38.735 EAL: TSC is not invariant 00:10:38.735 [2024-06-10 10:14:44.200173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.735 [2024-06-10 10:14:44.286526] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:10:38.735 [2024-06-10 10:14:44.288935] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.735 [2024-06-10 10:14:44.289823] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.735 [2024-06-10 10:14:44.289843] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:39.670 10:14:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:39.670 malloc1 00:10:39.670 10:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:39.928 [2024-06-10 10:14:45.417601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:39.928 [2024-06-10 10:14:45.417689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.928 [2024-06-10 10:14:45.417704] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf5e780 00:10:39.928 [2024-06-10 10:14:45.417731] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.928 [2024-06-10 10:14:45.418606] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.928 [2024-06-10 10:14:45.418652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:39.928 pt1 00:10:39.928 10:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:39.928 10:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:39.928 10:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:39.928 10:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:39.928 10:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:39.928 10:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:39.928 10:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:39.928 10:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:39.928 10:14:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:40.494 malloc2 00:10:40.494 10:14:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:40.753 [2024-06-10 10:14:46.261639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:40.753 [2024-06-10 10:14:46.261709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.753 [2024-06-10 10:14:46.261729] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf5ec80 00:10:40.753 [2024-06-10 10:14:46.261761] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.753 [2024-06-10 10:14:46.262306] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.753 [2024-06-10 10:14:46.262355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:40.753 pt2 00:10:40.753 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:40.753 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:40.753 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:10:40.753 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:10:40.753 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:40.753 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:40.753 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:40.753 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:40.753 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:41.011 malloc3 00:10:41.011 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:41.270 [2024-06-10 10:14:46.717664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:41.270 [2024-06-10 10:14:46.717750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.270 [2024-06-10 10:14:46.717771] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf5f180 00:10:41.270 [2024-06-10 10:14:46.717786] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.270 [2024-06-10 10:14:46.718384] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.270 [2024-06-10 10:14:46.718429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:41.270 pt3 00:10:41.270 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:41.270 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:41.270 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:10:41.528 [2024-06-10 10:14:46.957684] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:41.528 [2024-06-10 10:14:46.958180] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:41.529 [2024-06-10 10:14:46.958203] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:41.529 [2024-06-10 10:14:46.958256] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bf5f400 00:10:41.529 [2024-06-10 10:14:46.958261] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:41.529 [2024-06-10 10:14:46.958295] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bfc1e20 00:10:41.529 [2024-06-10 10:14:46.958355] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bf5f400 00:10:41.529 [2024-06-10 10:14:46.958359] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bf5f400 00:10:41.529 [2024-06-10 10:14:46.958383] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.529 10:14:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.787 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:41.787 "name": "raid_bdev1", 00:10:41.787 "uuid": "43a55010-2712-11ef-b084-113036b5c18d", 00:10:41.787 "strip_size_kb": 64, 00:10:41.787 "state": "online", 00:10:41.787 "raid_level": "concat", 00:10:41.787 "superblock": true, 00:10:41.787 "num_base_bdevs": 3, 00:10:41.787 "num_base_bdevs_discovered": 3, 00:10:41.787 "num_base_bdevs_operational": 3, 00:10:41.787 "base_bdevs_list": [ 00:10:41.787 { 00:10:41.787 "name": "pt1", 00:10:41.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:41.787 "is_configured": true, 00:10:41.787 "data_offset": 2048, 00:10:41.787 "data_size": 63488 00:10:41.787 }, 00:10:41.787 { 00:10:41.787 "name": "pt2", 00:10:41.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.787 "is_configured": true, 00:10:41.787 
"data_offset": 2048, 00:10:41.787 "data_size": 63488 00:10:41.787 }, 00:10:41.787 { 00:10:41.787 "name": "pt3", 00:10:41.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:41.787 "is_configured": true, 00:10:41.787 "data_offset": 2048, 00:10:41.787 "data_size": 63488 00:10:41.787 } 00:10:41.787 ] 00:10:41.787 }' 00:10:41.787 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:41.787 10:14:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.045 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:42.045 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:42.045 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:42.045 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:42.045 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:42.045 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:42.045 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:42.045 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:42.303 [2024-06-10 10:14:47.765746] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.303 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:42.303 "name": "raid_bdev1", 00:10:42.304 "aliases": [ 00:10:42.304 "43a55010-2712-11ef-b084-113036b5c18d" 00:10:42.304 ], 00:10:42.304 "product_name": "Raid Volume", 00:10:42.304 "block_size": 512, 00:10:42.304 "num_blocks": 190464, 00:10:42.304 "uuid": "43a55010-2712-11ef-b084-113036b5c18d", 00:10:42.304 "assigned_rate_limits": { 00:10:42.304 "rw_ios_per_sec": 0, 00:10:42.304 "rw_mbytes_per_sec": 0, 00:10:42.304 "r_mbytes_per_sec": 0, 00:10:42.304 "w_mbytes_per_sec": 0 00:10:42.304 }, 00:10:42.304 "claimed": false, 00:10:42.304 "zoned": false, 00:10:42.304 "supported_io_types": { 00:10:42.304 "read": true, 00:10:42.304 "write": true, 00:10:42.304 "unmap": true, 00:10:42.304 "write_zeroes": true, 00:10:42.304 "flush": true, 00:10:42.304 "reset": true, 00:10:42.304 "compare": false, 00:10:42.304 "compare_and_write": false, 00:10:42.304 "abort": false, 00:10:42.304 "nvme_admin": false, 00:10:42.304 "nvme_io": false 00:10:42.304 }, 00:10:42.304 "memory_domains": [ 00:10:42.304 { 00:10:42.304 "dma_device_id": "system", 00:10:42.304 "dma_device_type": 1 00:10:42.304 }, 00:10:42.304 { 00:10:42.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.304 "dma_device_type": 2 00:10:42.304 }, 00:10:42.304 { 00:10:42.304 "dma_device_id": "system", 00:10:42.304 "dma_device_type": 1 00:10:42.304 }, 00:10:42.304 { 00:10:42.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.304 "dma_device_type": 2 00:10:42.304 }, 00:10:42.304 { 00:10:42.304 "dma_device_id": "system", 00:10:42.304 "dma_device_type": 1 00:10:42.304 }, 00:10:42.304 { 00:10:42.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.304 "dma_device_type": 2 00:10:42.304 } 00:10:42.304 ], 00:10:42.304 "driver_specific": { 00:10:42.304 "raid": { 00:10:42.304 "uuid": "43a55010-2712-11ef-b084-113036b5c18d", 00:10:42.304 "strip_size_kb": 64, 00:10:42.304 "state": "online", 00:10:42.304 "raid_level": "concat", 
00:10:42.304 "superblock": true, 00:10:42.304 "num_base_bdevs": 3, 00:10:42.304 "num_base_bdevs_discovered": 3, 00:10:42.304 "num_base_bdevs_operational": 3, 00:10:42.304 "base_bdevs_list": [ 00:10:42.304 { 00:10:42.304 "name": "pt1", 00:10:42.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:42.304 "is_configured": true, 00:10:42.304 "data_offset": 2048, 00:10:42.304 "data_size": 63488 00:10:42.304 }, 00:10:42.304 { 00:10:42.304 "name": "pt2", 00:10:42.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:42.304 "is_configured": true, 00:10:42.304 "data_offset": 2048, 00:10:42.304 "data_size": 63488 00:10:42.304 }, 00:10:42.304 { 00:10:42.304 "name": "pt3", 00:10:42.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:42.304 "is_configured": true, 00:10:42.304 "data_offset": 2048, 00:10:42.304 "data_size": 63488 00:10:42.304 } 00:10:42.304 ] 00:10:42.304 } 00:10:42.304 } 00:10:42.304 }' 00:10:42.304 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.304 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:42.304 pt2 00:10:42.304 pt3' 00:10:42.304 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:42.304 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:42.304 10:14:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:42.563 "name": "pt1", 00:10:42.563 "aliases": [ 00:10:42.563 "00000000-0000-0000-0000-000000000001" 00:10:42.563 ], 00:10:42.563 "product_name": "passthru", 00:10:42.563 "block_size": 512, 00:10:42.563 "num_blocks": 65536, 00:10:42.563 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:42.563 "assigned_rate_limits": { 00:10:42.563 "rw_ios_per_sec": 0, 00:10:42.563 "rw_mbytes_per_sec": 0, 00:10:42.563 "r_mbytes_per_sec": 0, 00:10:42.563 "w_mbytes_per_sec": 0 00:10:42.563 }, 00:10:42.563 "claimed": true, 00:10:42.563 "claim_type": "exclusive_write", 00:10:42.563 "zoned": false, 00:10:42.563 "supported_io_types": { 00:10:42.563 "read": true, 00:10:42.563 "write": true, 00:10:42.563 "unmap": true, 00:10:42.563 "write_zeroes": true, 00:10:42.563 "flush": true, 00:10:42.563 "reset": true, 00:10:42.563 "compare": false, 00:10:42.563 "compare_and_write": false, 00:10:42.563 "abort": true, 00:10:42.563 "nvme_admin": false, 00:10:42.563 "nvme_io": false 00:10:42.563 }, 00:10:42.563 "memory_domains": [ 00:10:42.563 { 00:10:42.563 "dma_device_id": "system", 00:10:42.563 "dma_device_type": 1 00:10:42.563 }, 00:10:42.563 { 00:10:42.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.563 "dma_device_type": 2 00:10:42.563 } 00:10:42.563 ], 00:10:42.563 "driver_specific": { 00:10:42.563 "passthru": { 00:10:42.563 "name": "pt1", 00:10:42.563 "base_bdev_name": "malloc1" 00:10:42.563 } 00:10:42.563 } 00:10:42.563 }' 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:42.563 
10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:42.563 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:43.131 "name": "pt2", 00:10:43.131 "aliases": [ 00:10:43.131 "00000000-0000-0000-0000-000000000002" 00:10:43.131 ], 00:10:43.131 "product_name": "passthru", 00:10:43.131 "block_size": 512, 00:10:43.131 "num_blocks": 65536, 00:10:43.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.131 "assigned_rate_limits": { 00:10:43.131 "rw_ios_per_sec": 0, 00:10:43.131 "rw_mbytes_per_sec": 0, 00:10:43.131 "r_mbytes_per_sec": 0, 00:10:43.131 "w_mbytes_per_sec": 0 00:10:43.131 }, 00:10:43.131 "claimed": true, 00:10:43.131 "claim_type": "exclusive_write", 00:10:43.131 "zoned": false, 00:10:43.131 "supported_io_types": { 00:10:43.131 "read": true, 00:10:43.131 "write": true, 00:10:43.131 "unmap": true, 00:10:43.131 "write_zeroes": true, 00:10:43.131 "flush": true, 00:10:43.131 "reset": true, 00:10:43.131 "compare": false, 00:10:43.131 "compare_and_write": false, 00:10:43.131 "abort": true, 00:10:43.131 "nvme_admin": false, 00:10:43.131 "nvme_io": false 00:10:43.131 }, 00:10:43.131 "memory_domains": [ 00:10:43.131 { 00:10:43.131 "dma_device_id": "system", 00:10:43.131 "dma_device_type": 1 00:10:43.131 }, 00:10:43.131 { 00:10:43.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.131 "dma_device_type": 2 00:10:43.131 } 00:10:43.131 ], 00:10:43.131 "driver_specific": { 00:10:43.131 "passthru": { 00:10:43.131 "name": "pt2", 00:10:43.131 "base_bdev_name": "malloc2" 00:10:43.131 } 00:10:43.131 } 00:10:43.131 }' 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:43.131 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:43.390 "name": "pt3", 00:10:43.390 "aliases": [ 00:10:43.390 "00000000-0000-0000-0000-000000000003" 00:10:43.390 ], 00:10:43.390 "product_name": "passthru", 00:10:43.390 "block_size": 512, 00:10:43.390 "num_blocks": 65536, 00:10:43.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.390 "assigned_rate_limits": { 00:10:43.390 "rw_ios_per_sec": 0, 00:10:43.390 "rw_mbytes_per_sec": 0, 00:10:43.390 "r_mbytes_per_sec": 0, 00:10:43.390 "w_mbytes_per_sec": 0 00:10:43.390 }, 00:10:43.390 "claimed": true, 00:10:43.390 "claim_type": "exclusive_write", 00:10:43.390 "zoned": false, 00:10:43.390 "supported_io_types": { 00:10:43.390 "read": true, 00:10:43.390 "write": true, 00:10:43.390 "unmap": true, 00:10:43.390 "write_zeroes": true, 00:10:43.390 "flush": true, 00:10:43.390 "reset": true, 00:10:43.390 "compare": false, 00:10:43.390 "compare_and_write": false, 00:10:43.390 "abort": true, 00:10:43.390 "nvme_admin": false, 00:10:43.390 "nvme_io": false 00:10:43.390 }, 00:10:43.390 "memory_domains": [ 00:10:43.390 { 00:10:43.390 "dma_device_id": "system", 00:10:43.390 "dma_device_type": 1 00:10:43.390 }, 00:10:43.390 { 00:10:43.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.390 "dma_device_type": 2 00:10:43.390 } 00:10:43.390 ], 00:10:43.390 "driver_specific": { 00:10:43.390 "passthru": { 00:10:43.390 "name": "pt3", 00:10:43.390 "base_bdev_name": "malloc3" 00:10:43.390 } 00:10:43.390 } 00:10:43.390 }' 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:43.390 10:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:43.649 [2024-06-10 10:14:49.125870] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.649 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=43a55010-2712-11ef-b084-113036b5c18d 00:10:43.649 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 43a55010-2712-11ef-b084-113036b5c18d ']' 00:10:43.649 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:43.913 [2024-06-10 10:14:49.365824] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.913 [2024-06-10 10:14:49.365852] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.913 [2024-06-10 10:14:49.365872] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.913 [2024-06-10 10:14:49.365887] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.913 [2024-06-10 10:14:49.365891] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bf5f400 name raid_bdev1, state offline 00:10:43.913 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.913 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:44.172 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:44.172 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:44.172 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.172 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:44.430 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.430 10:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:44.997 10:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.997 10:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 
-- # local es=0 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:45.255 10:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:45.822 [2024-06-10 10:14:51.245954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:45.822 [2024-06-10 10:14:51.246451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:45.822 [2024-06-10 10:14:51.246471] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:45.822 [2024-06-10 10:14:51.246485] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:45.823 [2024-06-10 10:14:51.246527] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:45.823 [2024-06-10 10:14:51.246564] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:45.823 [2024-06-10 10:14:51.246577] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.823 [2024-06-10 10:14:51.246581] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bf5f180 name raid_bdev1, state configuring 00:10:45.823 request: 00:10:45.823 { 00:10:45.823 "name": "raid_bdev1", 00:10:45.823 "raid_level": "concat", 00:10:45.823 "base_bdevs": [ 00:10:45.823 "malloc1", 00:10:45.823 "malloc2", 00:10:45.823 "malloc3" 00:10:45.823 ], 00:10:45.823 "superblock": false, 00:10:45.823 "strip_size_kb": 64, 00:10:45.823 "method": "bdev_raid_create", 00:10:45.823 "req_id": 1 00:10:45.823 } 00:10:45.823 Got JSON-RPC error response 00:10:45.823 response: 00:10:45.823 { 00:10:45.823 "code": -17, 00:10:45.823 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:45.823 } 00:10:45.823 10:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:10:45.823 10:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:45.823 10:14:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:45.823 10:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:45.823 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.823 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:46.082 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:46.082 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:46.082 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:46.341 [2024-06-10 10:14:51.713948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:46.341 [2024-06-10 10:14:51.714008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.341 [2024-06-10 10:14:51.714020] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf5ec80 00:10:46.341 [2024-06-10 10:14:51.714028] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.341 [2024-06-10 10:14:51.714541] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.341 [2024-06-10 10:14:51.714571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:46.341 [2024-06-10 10:14:51.714593] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:46.341 [2024-06-10 10:14:51.714602] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:46.341 pt1 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.341 10:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.599 10:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:46.599 "name": "raid_bdev1", 00:10:46.599 "uuid": "43a55010-2712-11ef-b084-113036b5c18d", 00:10:46.599 "strip_size_kb": 64, 00:10:46.599 "state": 
"configuring", 00:10:46.599 "raid_level": "concat", 00:10:46.599 "superblock": true, 00:10:46.599 "num_base_bdevs": 3, 00:10:46.599 "num_base_bdevs_discovered": 1, 00:10:46.599 "num_base_bdevs_operational": 3, 00:10:46.599 "base_bdevs_list": [ 00:10:46.599 { 00:10:46.599 "name": "pt1", 00:10:46.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.599 "is_configured": true, 00:10:46.599 "data_offset": 2048, 00:10:46.599 "data_size": 63488 00:10:46.599 }, 00:10:46.599 { 00:10:46.599 "name": null, 00:10:46.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.599 "is_configured": false, 00:10:46.599 "data_offset": 2048, 00:10:46.599 "data_size": 63488 00:10:46.599 }, 00:10:46.599 { 00:10:46.599 "name": null, 00:10:46.599 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.599 "is_configured": false, 00:10:46.599 "data_offset": 2048, 00:10:46.599 "data_size": 63488 00:10:46.599 } 00:10:46.599 ] 00:10:46.599 }' 00:10:46.599 10:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:46.599 10:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.858 10:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:10:46.858 10:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:47.450 [2024-06-10 10:14:52.789993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:47.450 [2024-06-10 10:14:52.790053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.450 [2024-06-10 10:14:52.790066] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf5f680 00:10:47.450 [2024-06-10 10:14:52.790073] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.450 [2024-06-10 10:14:52.790172] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.450 [2024-06-10 10:14:52.790181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:47.450 [2024-06-10 10:14:52.790202] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:47.450 [2024-06-10 10:14:52.790209] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.450 pt2 00:10:47.450 10:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:47.708 [2024-06-10 10:14:53.214011] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:47.708 
10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.708 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.966 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:47.966 "name": "raid_bdev1", 00:10:47.966 "uuid": "43a55010-2712-11ef-b084-113036b5c18d", 00:10:47.966 "strip_size_kb": 64, 00:10:47.966 "state": "configuring", 00:10:47.966 "raid_level": "concat", 00:10:47.966 "superblock": true, 00:10:47.966 "num_base_bdevs": 3, 00:10:47.966 "num_base_bdevs_discovered": 1, 00:10:47.966 "num_base_bdevs_operational": 3, 00:10:47.966 "base_bdevs_list": [ 00:10:47.966 { 00:10:47.966 "name": "pt1", 00:10:47.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.966 "is_configured": true, 00:10:47.966 "data_offset": 2048, 00:10:47.966 "data_size": 63488 00:10:47.966 }, 00:10:47.966 { 00:10:47.966 "name": null, 00:10:47.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.966 "is_configured": false, 00:10:47.966 "data_offset": 2048, 00:10:47.966 "data_size": 63488 00:10:47.966 }, 00:10:47.966 { 00:10:47.966 "name": null, 00:10:47.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.966 "is_configured": false, 00:10:47.966 "data_offset": 2048, 00:10:47.966 "data_size": 63488 00:10:47.966 } 00:10:47.966 ] 00:10:47.966 }' 00:10:47.966 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:47.966 10:14:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.226 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:48.226 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:48.226 10:14:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:48.793 [2024-06-10 10:14:54.202047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:48.793 [2024-06-10 10:14:54.202104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.793 [2024-06-10 10:14:54.202116] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf5f680 00:10:48.794 [2024-06-10 10:14:54.202124] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.794 [2024-06-10 10:14:54.202222] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.794 [2024-06-10 10:14:54.202238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:48.794 [2024-06-10 10:14:54.202259] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:48.794 [2024-06-10 10:14:54.202266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:48.794 pt2 00:10:48.794 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:48.794 10:14:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:48.794 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:49.053 [2024-06-10 10:14:54.438046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:49.053 [2024-06-10 10:14:54.438088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.053 [2024-06-10 10:14:54.438097] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bf5f400 00:10:49.053 [2024-06-10 10:14:54.438104] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.053 [2024-06-10 10:14:54.438199] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.053 [2024-06-10 10:14:54.438207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:49.053 [2024-06-10 10:14:54.438225] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:49.053 [2024-06-10 10:14:54.438231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:49.053 [2024-06-10 10:14:54.438253] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bf5e780 00:10:49.053 [2024-06-10 10:14:54.438256] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:49.053 [2024-06-10 10:14:54.438275] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bfc1e20 00:10:49.053 [2024-06-10 10:14:54.438314] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bf5e780 00:10:49.053 [2024-06-10 10:14:54.438318] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bf5e780 00:10:49.053 [2024-06-10 10:14:54.438335] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.053 pt3 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:10:49.053 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.311 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.311 "name": "raid_bdev1", 00:10:49.311 "uuid": "43a55010-2712-11ef-b084-113036b5c18d", 00:10:49.311 "strip_size_kb": 64, 00:10:49.311 "state": "online", 00:10:49.311 "raid_level": "concat", 00:10:49.311 "superblock": true, 00:10:49.311 "num_base_bdevs": 3, 00:10:49.311 "num_base_bdevs_discovered": 3, 00:10:49.311 "num_base_bdevs_operational": 3, 00:10:49.311 "base_bdevs_list": [ 00:10:49.311 { 00:10:49.311 "name": "pt1", 00:10:49.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.311 "is_configured": true, 00:10:49.311 "data_offset": 2048, 00:10:49.311 "data_size": 63488 00:10:49.311 }, 00:10:49.311 { 00:10:49.311 "name": "pt2", 00:10:49.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.311 "is_configured": true, 00:10:49.311 "data_offset": 2048, 00:10:49.311 "data_size": 63488 00:10:49.311 }, 00:10:49.311 { 00:10:49.311 "name": "pt3", 00:10:49.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.311 "is_configured": true, 00:10:49.311 "data_offset": 2048, 00:10:49.311 "data_size": 63488 00:10:49.311 } 00:10:49.311 ] 00:10:49.311 }' 00:10:49.311 10:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.311 10:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.596 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:49.596 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:49.596 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:49.596 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:49.596 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:49.596 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:49.596 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:49.596 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:49.855 [2024-06-10 10:14:55.234127] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.855 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:49.855 "name": "raid_bdev1", 00:10:49.855 "aliases": [ 00:10:49.855 "43a55010-2712-11ef-b084-113036b5c18d" 00:10:49.855 ], 00:10:49.855 "product_name": "Raid Volume", 00:10:49.855 "block_size": 512, 00:10:49.855 "num_blocks": 190464, 00:10:49.855 "uuid": "43a55010-2712-11ef-b084-113036b5c18d", 00:10:49.855 "assigned_rate_limits": { 00:10:49.855 "rw_ios_per_sec": 0, 00:10:49.855 "rw_mbytes_per_sec": 0, 00:10:49.855 "r_mbytes_per_sec": 0, 00:10:49.855 "w_mbytes_per_sec": 0 00:10:49.855 }, 00:10:49.855 "claimed": false, 00:10:49.855 "zoned": false, 00:10:49.855 "supported_io_types": { 00:10:49.855 "read": true, 00:10:49.855 "write": true, 00:10:49.855 "unmap": true, 00:10:49.855 "write_zeroes": true, 00:10:49.855 "flush": true, 00:10:49.855 "reset": true, 00:10:49.855 "compare": false, 00:10:49.855 "compare_and_write": false, 00:10:49.855 "abort": false, 
00:10:49.855 "nvme_admin": false, 00:10:49.855 "nvme_io": false 00:10:49.855 }, 00:10:49.855 "memory_domains": [ 00:10:49.855 { 00:10:49.855 "dma_device_id": "system", 00:10:49.855 "dma_device_type": 1 00:10:49.855 }, 00:10:49.855 { 00:10:49.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.855 "dma_device_type": 2 00:10:49.855 }, 00:10:49.855 { 00:10:49.855 "dma_device_id": "system", 00:10:49.855 "dma_device_type": 1 00:10:49.855 }, 00:10:49.855 { 00:10:49.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.855 "dma_device_type": 2 00:10:49.855 }, 00:10:49.855 { 00:10:49.855 "dma_device_id": "system", 00:10:49.855 "dma_device_type": 1 00:10:49.855 }, 00:10:49.855 { 00:10:49.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.855 "dma_device_type": 2 00:10:49.855 } 00:10:49.855 ], 00:10:49.855 "driver_specific": { 00:10:49.855 "raid": { 00:10:49.855 "uuid": "43a55010-2712-11ef-b084-113036b5c18d", 00:10:49.855 "strip_size_kb": 64, 00:10:49.855 "state": "online", 00:10:49.855 "raid_level": "concat", 00:10:49.855 "superblock": true, 00:10:49.855 "num_base_bdevs": 3, 00:10:49.855 "num_base_bdevs_discovered": 3, 00:10:49.855 "num_base_bdevs_operational": 3, 00:10:49.855 "base_bdevs_list": [ 00:10:49.855 { 00:10:49.855 "name": "pt1", 00:10:49.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.855 "is_configured": true, 00:10:49.855 "data_offset": 2048, 00:10:49.855 "data_size": 63488 00:10:49.855 }, 00:10:49.855 { 00:10:49.855 "name": "pt2", 00:10:49.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.855 "is_configured": true, 00:10:49.855 "data_offset": 2048, 00:10:49.855 "data_size": 63488 00:10:49.855 }, 00:10:49.855 { 00:10:49.855 "name": "pt3", 00:10:49.855 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.855 "is_configured": true, 00:10:49.855 "data_offset": 2048, 00:10:49.855 "data_size": 63488 00:10:49.855 } 00:10:49.855 ] 00:10:49.855 } 00:10:49.855 } 00:10:49.855 }' 00:10:49.855 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.855 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:49.855 pt2 00:10:49.855 pt3' 00:10:49.855 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:49.855 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:49.855 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:50.113 "name": "pt1", 00:10:50.113 "aliases": [ 00:10:50.113 "00000000-0000-0000-0000-000000000001" 00:10:50.113 ], 00:10:50.113 "product_name": "passthru", 00:10:50.113 "block_size": 512, 00:10:50.113 "num_blocks": 65536, 00:10:50.113 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.113 "assigned_rate_limits": { 00:10:50.113 "rw_ios_per_sec": 0, 00:10:50.113 "rw_mbytes_per_sec": 0, 00:10:50.113 "r_mbytes_per_sec": 0, 00:10:50.113 "w_mbytes_per_sec": 0 00:10:50.113 }, 00:10:50.113 "claimed": true, 00:10:50.113 "claim_type": "exclusive_write", 00:10:50.113 "zoned": false, 00:10:50.113 "supported_io_types": { 00:10:50.113 "read": true, 00:10:50.113 "write": true, 00:10:50.113 "unmap": true, 00:10:50.113 "write_zeroes": true, 00:10:50.113 "flush": true, 00:10:50.113 "reset": true, 00:10:50.113 
"compare": false, 00:10:50.113 "compare_and_write": false, 00:10:50.113 "abort": true, 00:10:50.113 "nvme_admin": false, 00:10:50.113 "nvme_io": false 00:10:50.113 }, 00:10:50.113 "memory_domains": [ 00:10:50.113 { 00:10:50.113 "dma_device_id": "system", 00:10:50.113 "dma_device_type": 1 00:10:50.113 }, 00:10:50.113 { 00:10:50.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.113 "dma_device_type": 2 00:10:50.113 } 00:10:50.113 ], 00:10:50.113 "driver_specific": { 00:10:50.113 "passthru": { 00:10:50.113 "name": "pt1", 00:10:50.113 "base_bdev_name": "malloc1" 00:10:50.113 } 00:10:50.113 } 00:10:50.113 }' 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:50.113 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:50.371 "name": "pt2", 00:10:50.371 "aliases": [ 00:10:50.371 "00000000-0000-0000-0000-000000000002" 00:10:50.371 ], 00:10:50.371 "product_name": "passthru", 00:10:50.371 "block_size": 512, 00:10:50.371 "num_blocks": 65536, 00:10:50.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.371 "assigned_rate_limits": { 00:10:50.371 "rw_ios_per_sec": 0, 00:10:50.371 "rw_mbytes_per_sec": 0, 00:10:50.371 "r_mbytes_per_sec": 0, 00:10:50.371 "w_mbytes_per_sec": 0 00:10:50.371 }, 00:10:50.371 "claimed": true, 00:10:50.371 "claim_type": "exclusive_write", 00:10:50.371 "zoned": false, 00:10:50.371 "supported_io_types": { 00:10:50.371 "read": true, 00:10:50.371 "write": true, 00:10:50.371 "unmap": true, 00:10:50.371 "write_zeroes": true, 00:10:50.371 "flush": true, 00:10:50.371 "reset": true, 00:10:50.371 "compare": false, 00:10:50.371 "compare_and_write": false, 00:10:50.371 "abort": true, 00:10:50.371 "nvme_admin": false, 00:10:50.371 "nvme_io": false 00:10:50.371 }, 00:10:50.371 "memory_domains": [ 00:10:50.371 { 00:10:50.371 "dma_device_id": "system", 00:10:50.371 "dma_device_type": 1 00:10:50.371 }, 00:10:50.371 { 00:10:50.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.371 "dma_device_type": 2 00:10:50.371 } 00:10:50.371 ], 
00:10:50.371 "driver_specific": { 00:10:50.371 "passthru": { 00:10:50.371 "name": "pt2", 00:10:50.371 "base_bdev_name": "malloc2" 00:10:50.371 } 00:10:50.371 } 00:10:50.371 }' 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:50.371 10:14:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:50.630 "name": "pt3", 00:10:50.630 "aliases": [ 00:10:50.630 "00000000-0000-0000-0000-000000000003" 00:10:50.630 ], 00:10:50.630 "product_name": "passthru", 00:10:50.630 "block_size": 512, 00:10:50.630 "num_blocks": 65536, 00:10:50.630 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.630 "assigned_rate_limits": { 00:10:50.630 "rw_ios_per_sec": 0, 00:10:50.630 "rw_mbytes_per_sec": 0, 00:10:50.630 "r_mbytes_per_sec": 0, 00:10:50.630 "w_mbytes_per_sec": 0 00:10:50.630 }, 00:10:50.630 "claimed": true, 00:10:50.630 "claim_type": "exclusive_write", 00:10:50.630 "zoned": false, 00:10:50.630 "supported_io_types": { 00:10:50.630 "read": true, 00:10:50.630 "write": true, 00:10:50.630 "unmap": true, 00:10:50.630 "write_zeroes": true, 00:10:50.630 "flush": true, 00:10:50.630 "reset": true, 00:10:50.630 "compare": false, 00:10:50.630 "compare_and_write": false, 00:10:50.630 "abort": true, 00:10:50.630 "nvme_admin": false, 00:10:50.630 "nvme_io": false 00:10:50.630 }, 00:10:50.630 "memory_domains": [ 00:10:50.630 { 00:10:50.630 "dma_device_id": "system", 00:10:50.630 "dma_device_type": 1 00:10:50.630 }, 00:10:50.630 { 00:10:50.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.630 "dma_device_type": 2 00:10:50.630 } 00:10:50.630 ], 00:10:50.630 "driver_specific": { 00:10:50.630 "passthru": { 00:10:50.630 "name": "pt3", 00:10:50.630 "base_bdev_name": "malloc3" 00:10:50.630 } 00:10:50.630 } 00:10:50.630 }' 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:50.630 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:50.888 [2024-06-10 10:14:56.398173] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 43a55010-2712-11ef-b084-113036b5c18d '!=' 43a55010-2712-11ef-b084-113036b5c18d ']' 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 56243 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 56243 ']' 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 56243 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps -c -o command 56243 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # tail -1 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:10:50.888 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:10:50.888 killing process with pid 56243 00:10:50.889 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 56243' 00:10:50.889 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 56243 00:10:50.889 [2024-06-10 10:14:56.430728] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.889 [2024-06-10 10:14:56.430761] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.889 [2024-06-10 10:14:56.430776] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.889 [2024-06-10 10:14:56.430781] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bf5e780 name raid_bdev1, state offline 
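For readers following the trace, the verify_raid_bdev_state checks above reduce to one RPC call plus a jq filter; a minimal sketch, assuming the rpc.py path and the /var/tmp/spdk-raid.sock socket used in this run:

  rpc_py='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  # dump every raid bdev and keep only the one under test
  $rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  # the test then compares .state, .raid_level, .strip_size_kb and the
  # num_base_bdevs_discovered / num_base_bdevs_operational counters
  # against the expected values passed into verify_raid_bdev_state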
00:10:50.889 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 56243 00:10:50.889 [2024-06-10 10:14:56.445212] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.147 10:14:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:51.147 00:10:51.147 real 0m12.915s 00:10:51.147 user 0m23.238s 00:10:51.147 sys 0m1.831s 00:10:51.147 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:51.147 10:14:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.147 ************************************ 00:10:51.147 END TEST raid_superblock_test 00:10:51.147 ************************************ 00:10:51.147 10:14:56 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:51.147 10:14:56 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:10:51.147 10:14:56 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:51.147 10:14:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.147 ************************************ 00:10:51.147 START TEST raid_read_error_test 00:10:51.147 ************************************ 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 3 read 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:51.147 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 
00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.P6xae4Qc 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=56598 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 56598 /var/tmp/spdk-raid.sock 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 56598 ']' 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:51.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:51.148 10:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.148 [2024-06-10 10:14:56.678189] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:10:51.148 [2024-06-10 10:14:56.678407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:51.715 EAL: TSC is not safe to use in SMP mode 00:10:51.715 EAL: TSC is not invariant 00:10:51.715 [2024-06-10 10:14:57.140546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.715 [2024-06-10 10:14:57.246072] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
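The read-error test drives I/O through the bdevperf example app rather than through RPC-only checks; the launch captured above amounts to the command below (paths and flags exactly as in this run, the annotations are the usual meanings of these bdevperf options and are a hedged gloss, not something the log itself states):

  # -r: RPC socket this bdevperf instance listens on, -T: run only against raid_bdev1,
  # -t/-w/-M/-o/-q: 60 s randrw at 50% reads, 128 KiB I/Os, queue depth 1,
  # -z: stay idle until told to start over RPC, -L bdev_raid: extra raid debug logging
  /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
      -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid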
00:10:51.715 [2024-06-10 10:14:57.248254] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.715 [2024-06-10 10:14:57.248972] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.715 [2024-06-10 10:14:57.248985] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.282 10:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:52.282 10:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:10:52.282 10:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:52.282 10:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:52.282 BaseBdev1_malloc 00:10:52.282 10:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:52.540 true 00:10:52.540 10:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:52.798 [2024-06-10 10:14:58.371778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:52.798 [2024-06-10 10:14:58.371847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.798 [2024-06-10 10:14:58.371889] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82be1b780 00:10:52.798 [2024-06-10 10:14:58.371897] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.798 [2024-06-10 10:14:58.372402] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.798 [2024-06-10 10:14:58.372435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:52.798 BaseBdev1 00:10:52.798 10:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:52.798 10:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:53.056 BaseBdev2_malloc 00:10:53.056 10:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:53.314 true 00:10:53.572 10:14:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:53.829 [2024-06-10 10:14:59.191809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:53.829 [2024-06-10 10:14:59.191872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.829 [2024-06-10 10:14:59.191900] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82be1bc80 00:10:53.829 [2024-06-10 10:14:59.191908] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.829 [2024-06-10 10:14:59.192443] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.829 [2024-06-10 10:14:59.192472] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:10:53.829 BaseBdev2 00:10:53.829 10:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:53.829 10:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:54.099 BaseBdev3_malloc 00:10:54.099 10:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:54.383 true 00:10:54.383 10:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:54.642 [2024-06-10 10:14:59.991828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:54.642 [2024-06-10 10:14:59.991897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.642 [2024-06-10 10:14:59.991925] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82be1c180 00:10:54.642 [2024-06-10 10:14:59.991939] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.642 [2024-06-10 10:14:59.992471] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.642 [2024-06-10 10:14:59.992502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:54.642 BaseBdev3 00:10:54.642 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:54.642 [2024-06-10 10:15:00.243869] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.642 [2024-06-10 10:15:00.244419] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.642 [2024-06-10 10:15:00.244459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.642 [2024-06-10 10:15:00.244532] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82be1c400 00:10:54.642 [2024-06-10 10:15:00.244541] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:54.642 [2024-06-10 10:15:00.244589] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be87e20 00:10:54.642 [2024-06-10 10:15:00.244661] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82be1c400 00:10:54.642 [2024-06-10 10:15:00.244670] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82be1c400 00:10:54.642 [2024-06-10 10:15:00.244708] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
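The array assembled just above sits on a three-layer stack per member: a malloc bdev, an error-injection bdev on top of it, and a passthru bdev that the raid actually consumes (the state verification continues below). Reproduced with the same RPCs for one leg, plus the final assembly:

  RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc          # 32 MB backing store, 512-byte blocks
  $RPC bdev_error_create BaseBdev1_malloc                     # exposes EE_BaseBdev1_malloc for fault injection
  $RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # repeat for BaseBdev2 and BaseBdev3, then:
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s   # 64 KB strips, superblock on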
00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.901 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.161 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:55.161 "name": "raid_bdev1", 00:10:55.161 "uuid": "4b909f98-2712-11ef-b084-113036b5c18d", 00:10:55.161 "strip_size_kb": 64, 00:10:55.161 "state": "online", 00:10:55.161 "raid_level": "concat", 00:10:55.161 "superblock": true, 00:10:55.161 "num_base_bdevs": 3, 00:10:55.161 "num_base_bdevs_discovered": 3, 00:10:55.161 "num_base_bdevs_operational": 3, 00:10:55.161 "base_bdevs_list": [ 00:10:55.161 { 00:10:55.161 "name": "BaseBdev1", 00:10:55.161 "uuid": "634642d5-2580-9c56-b20c-8451deb773ed", 00:10:55.161 "is_configured": true, 00:10:55.161 "data_offset": 2048, 00:10:55.161 "data_size": 63488 00:10:55.161 }, 00:10:55.161 { 00:10:55.161 "name": "BaseBdev2", 00:10:55.161 "uuid": "8e679504-f987-4655-b43a-71f9649b89c4", 00:10:55.161 "is_configured": true, 00:10:55.161 "data_offset": 2048, 00:10:55.161 "data_size": 63488 00:10:55.161 }, 00:10:55.161 { 00:10:55.161 "name": "BaseBdev3", 00:10:55.161 "uuid": "dfabffa9-b89a-a85f-b591-dc2b8d5885d9", 00:10:55.161 "is_configured": true, 00:10:55.161 "data_offset": 2048, 00:10:55.161 "data_size": 63488 00:10:55.161 } 00:10:55.161 ] 00:10:55.161 }' 00:10:55.161 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:55.161 10:15:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.420 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:55.420 10:15:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:55.679 [2024-06-10 10:15:01.035953] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be87ec0 00:10:56.617 10:15:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:56.875 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:56.875 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:10:56.875 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:56.875 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:56.875 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:56.876 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:10:56.876 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:56.876 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:56.876 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:56.876 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:56.876 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:56.876 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:56.876 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:56.876 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.876 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.135 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:57.135 "name": "raid_bdev1", 00:10:57.135 "uuid": "4b909f98-2712-11ef-b084-113036b5c18d", 00:10:57.135 "strip_size_kb": 64, 00:10:57.135 "state": "online", 00:10:57.135 "raid_level": "concat", 00:10:57.135 "superblock": true, 00:10:57.135 "num_base_bdevs": 3, 00:10:57.135 "num_base_bdevs_discovered": 3, 00:10:57.135 "num_base_bdevs_operational": 3, 00:10:57.135 "base_bdevs_list": [ 00:10:57.135 { 00:10:57.135 "name": "BaseBdev1", 00:10:57.135 "uuid": "634642d5-2580-9c56-b20c-8451deb773ed", 00:10:57.135 "is_configured": true, 00:10:57.135 "data_offset": 2048, 00:10:57.135 "data_size": 63488 00:10:57.135 }, 00:10:57.135 { 00:10:57.135 "name": "BaseBdev2", 00:10:57.135 "uuid": "8e679504-f987-4655-b43a-71f9649b89c4", 00:10:57.135 "is_configured": true, 00:10:57.135 "data_offset": 2048, 00:10:57.135 "data_size": 63488 00:10:57.135 }, 00:10:57.135 { 00:10:57.135 "name": "BaseBdev3", 00:10:57.135 "uuid": "dfabffa9-b89a-a85f-b591-dc2b8d5885d9", 00:10:57.135 "is_configured": true, 00:10:57.135 "data_offset": 2048, 00:10:57.135 "data_size": 63488 00:10:57.135 } 00:10:57.135 ] 00:10:57.135 }' 00:10:57.135 10:15:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:57.135 10:15:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.703 10:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:57.703 [2024-06-10 10:15:03.286054] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.703 [2024-06-10 10:15:03.286091] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.703 [2024-06-10 10:15:03.286494] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.703 [2024-06-10 10:15:03.286513] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.703 [2024-06-10 10:15:03.286523] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.703 [2024-06-10 10:15:03.286528] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be1c400 name raid_bdev1, state offline 00:10:57.703 0 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 
56598 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 56598 ']' 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 56598 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # tail -1 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 56598 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:10:57.962 killing process with pid 56598 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 56598' 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 56598 00:10:57.962 [2024-06-10 10:15:03.319161] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 56598 00:10:57.962 [2024-06-10 10:15:03.333739] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.P6xae4Qc 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:10:57.962 00:10:57.962 real 0m6.852s 00:10:57.962 user 0m10.999s 00:10:57.962 sys 0m1.012s 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:10:57.962 10:15:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.962 ************************************ 00:10:57.962 END TEST raid_read_error_test 00:10:57.962 ************************************ 00:10:57.962 10:15:03 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:57.962 10:15:03 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:10:57.962 10:15:03 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:10:57.962 10:15:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.962 ************************************ 00:10:57.962 START TEST raid_write_error_test 00:10:57.962 ************************************ 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 3 write 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # 
local num_base_bdevs=3 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:57.962 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:58.221 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.UfUHK3ri 00:10:58.221 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=56733 00:10:58.221 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 56733 /var/tmp/spdk-raid.sock 00:10:58.221 10:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 56733 ']' 00:10:58.221 10:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:58.221 10:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:10:58.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
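As in the read-error test above, the pass/fail decision for this write-error test comes down to a single field pulled out of a throwaway bdevperf log created under /raidtest: the per-second failure count reported for raid_bdev1. The closing check, condensed from the commands seen at the end of the read test:

  bdevperf_log=$(mktemp -p /raidtest)                        # e.g. /raidtest/tmp.UfUHK3ri in this run
  # ...run bdevperf with failures injected into one leg, its output captured in $bdevperf_log...
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != 0.00 ]]   # concat has no redundancy, so injected faults are expected to surface as failed I/O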
00:10:58.221 10:15:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:58.221 10:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:58.221 10:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:10:58.221 10:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.221 [2024-06-10 10:15:03.579059] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:10:58.221 [2024-06-10 10:15:03.579412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:58.484 EAL: TSC is not safe to use in SMP mode 00:10:58.484 EAL: TSC is not invariant 00:10:58.484 [2024-06-10 10:15:04.056506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.748 [2024-06-10 10:15:04.143432] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:58.748 [2024-06-10 10:15:04.145711] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.748 [2024-06-10 10:15:04.146481] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.748 [2024-06-10 10:15:04.146495] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.315 10:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:10:59.315 10:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:10:59.315 10:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:59.315 10:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:59.315 BaseBdev1_malloc 00:10:59.574 10:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:59.574 true 00:10:59.832 10:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:00.091 [2024-06-10 10:15:05.482803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:00.091 [2024-06-10 10:15:05.482885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.091 [2024-06-10 10:15:05.482933] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfa780 00:11:00.091 [2024-06-10 10:15:05.482948] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.091 [2024-06-10 10:15:05.483662] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.091 [2024-06-10 10:15:05.483726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:00.091 BaseBdev1 00:11:00.091 10:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:00.091 10:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:00.349 BaseBdev2_malloc 00:11:00.349 10:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:00.631 true 00:11:00.631 10:15:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:00.632 [2024-06-10 10:15:06.182784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:00.632 [2024-06-10 10:15:06.182851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.632 [2024-06-10 10:15:06.182881] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfac80 00:11:00.632 [2024-06-10 10:15:06.182889] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.632 [2024-06-10 10:15:06.183509] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.632 [2024-06-10 10:15:06.183551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:00.632 BaseBdev2 00:11:00.632 10:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:00.632 10:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:00.889 BaseBdev3_malloc 00:11:00.889 10:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:01.146 true 00:11:01.146 10:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:01.402 [2024-06-10 10:15:06.938812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:01.402 [2024-06-10 10:15:06.938878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.402 [2024-06-10 10:15:06.938908] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfb180 00:11:01.402 [2024-06-10 10:15:06.938916] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.402 [2024-06-10 10:15:06.939476] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.402 [2024-06-10 10:15:06.939507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:01.402 BaseBdev3 00:11:01.403 10:15:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:01.660 [2024-06-10 10:15:07.162832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.660 [2024-06-10 10:15:07.163331] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.660 [2024-06-10 10:15:07.163359] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.660 [2024-06-10 10:15:07.163416] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abfb400 00:11:01.660 [2024-06-10 10:15:07.163421] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:01.660 [2024-06-10 10:15:07.163457] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac66e20 00:11:01.660 [2024-06-10 10:15:07.163514] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abfb400 00:11:01.660 [2024-06-10 10:15:07.163518] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82abfb400 00:11:01.660 [2024-06-10 10:15:07.163542] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.660 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.918 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:01.918 "name": "raid_bdev1", 00:11:01.918 "uuid": "4fb05fa4-2712-11ef-b084-113036b5c18d", 00:11:01.918 "strip_size_kb": 64, 00:11:01.918 "state": "online", 00:11:01.918 "raid_level": "concat", 00:11:01.918 "superblock": true, 00:11:01.918 "num_base_bdevs": 3, 00:11:01.918 "num_base_bdevs_discovered": 3, 00:11:01.918 "num_base_bdevs_operational": 3, 00:11:01.918 "base_bdevs_list": [ 00:11:01.918 { 00:11:01.918 "name": "BaseBdev1", 00:11:01.918 "uuid": "89e1e745-d834-2658-9f08-7ec51b407e08", 00:11:01.918 "is_configured": true, 00:11:01.918 "data_offset": 2048, 00:11:01.918 "data_size": 63488 00:11:01.918 }, 00:11:01.918 { 00:11:01.918 "name": "BaseBdev2", 00:11:01.918 "uuid": "1acad02c-db7c-a952-b54d-c68d50465566", 00:11:01.918 "is_configured": true, 00:11:01.918 "data_offset": 2048, 00:11:01.918 "data_size": 63488 00:11:01.918 }, 00:11:01.918 { 00:11:01.918 "name": "BaseBdev3", 00:11:01.918 "uuid": "1d1dbc3a-3bdc-0457-8d1e-60db19afb773", 00:11:01.918 "is_configured": true, 00:11:01.918 "data_offset": 2048, 00:11:01.918 "data_size": 63488 00:11:01.918 } 00:11:01.918 ] 00:11:01.918 }' 00:11:01.918 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:01.918 10:15:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
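With raid_bdev1 online and verified, the write-error phase proper follows: perform_tests is issued (apparently in the background, given the one-second sleep that follows it), and the write failures are injected into BaseBdev1's error bdev while traffic is already flowing. The equivalent manual sequence, with the trailing & being an inference rather than something visible in the xtrace output:

  RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests &             # kick off the 60 s randrw run
  sleep 1
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure   # fail writes on one leg mid-run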
00:11:02.486 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:02.486 10:15:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:02.486 [2024-06-10 10:15:07.994879] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac66ec0 00:11:03.421 10:15:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.679 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.936 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:03.936 "name": "raid_bdev1", 00:11:03.936 "uuid": "4fb05fa4-2712-11ef-b084-113036b5c18d", 00:11:03.936 "strip_size_kb": 64, 00:11:03.936 "state": "online", 00:11:03.936 "raid_level": "concat", 00:11:03.936 "superblock": true, 00:11:03.936 "num_base_bdevs": 3, 00:11:03.936 "num_base_bdevs_discovered": 3, 00:11:03.936 "num_base_bdevs_operational": 3, 00:11:03.936 "base_bdevs_list": [ 00:11:03.936 { 00:11:03.936 "name": "BaseBdev1", 00:11:03.936 "uuid": "89e1e745-d834-2658-9f08-7ec51b407e08", 00:11:03.936 "is_configured": true, 00:11:03.936 "data_offset": 2048, 00:11:03.936 "data_size": 63488 00:11:03.936 }, 00:11:03.936 { 00:11:03.936 "name": "BaseBdev2", 00:11:03.936 "uuid": "1acad02c-db7c-a952-b54d-c68d50465566", 00:11:03.936 "is_configured": true, 00:11:03.936 "data_offset": 2048, 00:11:03.936 "data_size": 63488 00:11:03.936 }, 00:11:03.936 { 00:11:03.936 "name": "BaseBdev3", 00:11:03.936 "uuid": "1d1dbc3a-3bdc-0457-8d1e-60db19afb773", 00:11:03.936 "is_configured": true, 00:11:03.936 "data_offset": 2048, 00:11:03.936 "data_size": 63488 
00:11:03.936 } 00:11:03.936 ] 00:11:03.936 }' 00:11:03.936 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:03.936 10:15:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.503 10:15:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:04.503 [2024-06-10 10:15:10.052564] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.503 [2024-06-10 10:15:10.052599] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.503 [2024-06-10 10:15:10.052957] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.503 [2024-06-10 10:15:10.052967] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.503 [2024-06-10 10:15:10.052974] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.503 [2024-06-10 10:15:10.052978] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abfb400 name raid_bdev1, state offline 00:11:04.503 0 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 56733 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 56733 ']' 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 56733 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # tail -1 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 56733 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:11:04.503 killing process with pid 56733 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 56733' 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 56733 00:11:04.503 [2024-06-10 10:15:10.084555] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.503 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 56733 00:11:04.503 [2024-06-10 10:15:10.099354] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.761 10:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.UfUHK3ri 00:11:04.761 10:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:04.761 10:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:04.761 10:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:11:04.761 10:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:04.761 10:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:04.761 10:15:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:04.761 10:15:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:11:04.761 00:11:04.761 real 0m6.740s 00:11:04.761 user 0m10.658s 00:11:04.761 sys 0m1.094s 00:11:04.761 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:04.761 10:15:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.761 ************************************ 00:11:04.761 END TEST raid_write_error_test 00:11:04.761 ************************************ 00:11:04.761 10:15:10 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:04.761 10:15:10 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:11:04.761 10:15:10 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:11:04.761 10:15:10 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:04.761 10:15:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.761 ************************************ 00:11:04.761 START TEST raid_state_function_test 00:11:04.761 ************************************ 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 3 false 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:04.761 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local 
superblock_create_arg 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=56862 00:11:04.762 Process raid pid: 56862 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56862' 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 56862 /var/tmp/spdk-raid.sock 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 56862 ']' 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:04.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:04.762 10:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.762 [2024-06-10 10:15:10.351390] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:11:04.762 [2024-06-10 10:15:10.351603] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:05.333 EAL: TSC is not safe to use in SMP mode 00:11:05.333 EAL: TSC is not invariant 00:11:05.333 [2024-06-10 10:15:10.850270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.590 [2024-06-10 10:15:10.970651] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
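Unlike the two error tests, this state-function test runs no I/O at all; the app here is the bare bdev_svc stub, and the raid bdev is driven through its states purely with RPCs, creating the array before any of its members exist and then adding them one by one, as the output below shows. Condensed to the essential calls:

  RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid   # members missing, state stays "configuring"
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'        # num_base_bdevs_discovered: 0
  $RPC bdev_malloc_create 32 512 -b BaseBdev1                                         # claimed on creation, discovered count becomes 1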
00:11:05.591 [2024-06-10 10:15:10.973357] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.591 [2024-06-10 10:15:10.974296] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.591 [2024-06-10 10:15:10.974313] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.848 10:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:05.848 10:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:11:05.848 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:06.105 [2024-06-10 10:15:11.698634] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.105 [2024-06-10 10:15:11.698694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.105 [2024-06-10 10:15:11.698699] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.105 [2024-06-10 10:15:11.698707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.105 [2024-06-10 10:15:11.698711] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:06.105 [2024-06-10 10:15:11.698718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.362 10:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.619 10:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:06.619 "name": "Existed_Raid", 00:11:06.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.619 "strip_size_kb": 0, 00:11:06.619 "state": "configuring", 00:11:06.619 "raid_level": "raid1", 00:11:06.619 "superblock": false, 00:11:06.619 "num_base_bdevs": 3, 00:11:06.619 "num_base_bdevs_discovered": 0, 00:11:06.619 "num_base_bdevs_operational": 3, 00:11:06.619 "base_bdevs_list": [ 
00:11:06.619 { 00:11:06.619 "name": "BaseBdev1", 00:11:06.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.619 "is_configured": false, 00:11:06.619 "data_offset": 0, 00:11:06.619 "data_size": 0 00:11:06.619 }, 00:11:06.619 { 00:11:06.619 "name": "BaseBdev2", 00:11:06.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.619 "is_configured": false, 00:11:06.619 "data_offset": 0, 00:11:06.619 "data_size": 0 00:11:06.619 }, 00:11:06.619 { 00:11:06.619 "name": "BaseBdev3", 00:11:06.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.619 "is_configured": false, 00:11:06.619 "data_offset": 0, 00:11:06.619 "data_size": 0 00:11:06.619 } 00:11:06.619 ] 00:11:06.619 }' 00:11:06.620 10:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:06.620 10:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.876 10:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:07.198 [2024-06-10 10:15:12.742842] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:07.198 [2024-06-10 10:15:12.742878] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a876500 name Existed_Raid, state configuring 00:11:07.198 10:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:07.456 [2024-06-10 10:15:13.002871] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:07.456 [2024-06-10 10:15:13.002945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:07.456 [2024-06-10 10:15:13.002955] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:07.456 [2024-06-10 10:15:13.002973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:07.456 [2024-06-10 10:15:13.002982] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:07.456 [2024-06-10 10:15:13.002998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:07.456 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:07.714 [2024-06-10 10:15:13.247791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.714 BaseBdev1 00:11:07.714 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:07.714 10:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:11:07.714 10:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:07.714 10:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:11:07.714 10:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:07.714 10:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:07.714 10:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:07.972 10:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:08.230 [ 00:11:08.230 { 00:11:08.230 "name": "BaseBdev1", 00:11:08.230 "aliases": [ 00:11:08.230 "5350b9c6-2712-11ef-b084-113036b5c18d" 00:11:08.230 ], 00:11:08.230 "product_name": "Malloc disk", 00:11:08.230 "block_size": 512, 00:11:08.230 "num_blocks": 65536, 00:11:08.230 "uuid": "5350b9c6-2712-11ef-b084-113036b5c18d", 00:11:08.230 "assigned_rate_limits": { 00:11:08.230 "rw_ios_per_sec": 0, 00:11:08.230 "rw_mbytes_per_sec": 0, 00:11:08.230 "r_mbytes_per_sec": 0, 00:11:08.230 "w_mbytes_per_sec": 0 00:11:08.230 }, 00:11:08.230 "claimed": true, 00:11:08.230 "claim_type": "exclusive_write", 00:11:08.230 "zoned": false, 00:11:08.230 "supported_io_types": { 00:11:08.230 "read": true, 00:11:08.230 "write": true, 00:11:08.230 "unmap": true, 00:11:08.230 "write_zeroes": true, 00:11:08.230 "flush": true, 00:11:08.230 "reset": true, 00:11:08.230 "compare": false, 00:11:08.230 "compare_and_write": false, 00:11:08.230 "abort": true, 00:11:08.230 "nvme_admin": false, 00:11:08.230 "nvme_io": false 00:11:08.230 }, 00:11:08.230 "memory_domains": [ 00:11:08.230 { 00:11:08.230 "dma_device_id": "system", 00:11:08.230 "dma_device_type": 1 00:11:08.230 }, 00:11:08.230 { 00:11:08.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.230 "dma_device_type": 2 00:11:08.230 } 00:11:08.230 ], 00:11:08.230 "driver_specific": {} 00:11:08.230 } 00:11:08.230 ] 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:08.230 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:08.231 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.231 10:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.489 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:08.489 "name": "Existed_Raid", 00:11:08.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.489 "strip_size_kb": 0, 00:11:08.489 "state": "configuring", 00:11:08.489 "raid_level": "raid1", 00:11:08.489 "superblock": false, 00:11:08.489 
"num_base_bdevs": 3, 00:11:08.489 "num_base_bdevs_discovered": 1, 00:11:08.489 "num_base_bdevs_operational": 3, 00:11:08.489 "base_bdevs_list": [ 00:11:08.489 { 00:11:08.489 "name": "BaseBdev1", 00:11:08.489 "uuid": "5350b9c6-2712-11ef-b084-113036b5c18d", 00:11:08.489 "is_configured": true, 00:11:08.489 "data_offset": 0, 00:11:08.489 "data_size": 65536 00:11:08.489 }, 00:11:08.489 { 00:11:08.489 "name": "BaseBdev2", 00:11:08.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.489 "is_configured": false, 00:11:08.489 "data_offset": 0, 00:11:08.489 "data_size": 0 00:11:08.489 }, 00:11:08.489 { 00:11:08.489 "name": "BaseBdev3", 00:11:08.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.489 "is_configured": false, 00:11:08.489 "data_offset": 0, 00:11:08.489 "data_size": 0 00:11:08.489 } 00:11:08.489 ] 00:11:08.489 }' 00:11:08.489 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:08.489 10:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.054 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:09.054 [2024-06-10 10:15:14.650898] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.054 [2024-06-10 10:15:14.650930] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a876500 name Existed_Raid, state configuring 00:11:09.312 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:09.570 [2024-06-10 10:15:14.962935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.570 [2024-06-10 10:15:14.963644] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.570 [2024-06-10 10:15:14.963690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.570 [2024-06-10 10:15:14.963699] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.570 [2024-06-10 10:15:14.963723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.570 10:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.854 10:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:09.854 "name": "Existed_Raid", 00:11:09.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.854 "strip_size_kb": 0, 00:11:09.854 "state": "configuring", 00:11:09.854 "raid_level": "raid1", 00:11:09.854 "superblock": false, 00:11:09.854 "num_base_bdevs": 3, 00:11:09.854 "num_base_bdevs_discovered": 1, 00:11:09.854 "num_base_bdevs_operational": 3, 00:11:09.854 "base_bdevs_list": [ 00:11:09.854 { 00:11:09.854 "name": "BaseBdev1", 00:11:09.854 "uuid": "5350b9c6-2712-11ef-b084-113036b5c18d", 00:11:09.854 "is_configured": true, 00:11:09.854 "data_offset": 0, 00:11:09.854 "data_size": 65536 00:11:09.854 }, 00:11:09.854 { 00:11:09.854 "name": "BaseBdev2", 00:11:09.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.854 "is_configured": false, 00:11:09.854 "data_offset": 0, 00:11:09.854 "data_size": 0 00:11:09.854 }, 00:11:09.854 { 00:11:09.854 "name": "BaseBdev3", 00:11:09.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.854 "is_configured": false, 00:11:09.854 "data_offset": 0, 00:11:09.854 "data_size": 0 00:11:09.854 } 00:11:09.854 ] 00:11:09.854 }' 00:11:09.854 10:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:09.854 10:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.112 10:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:10.371 [2024-06-10 10:15:15.859045] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.371 BaseBdev2 00:11:10.371 10:15:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:10.371 10:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:11:10.371 10:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:10.371 10:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:11:10.371 10:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:10.371 10:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:10.371 10:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:10.634 10:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:10.896 [ 00:11:10.896 { 00:11:10.896 "name": "BaseBdev2", 00:11:10.896 "aliases": [ 00:11:10.896 "54df4b6f-2712-11ef-b084-113036b5c18d" 00:11:10.896 ], 00:11:10.896 "product_name": "Malloc 
disk", 00:11:10.896 "block_size": 512, 00:11:10.896 "num_blocks": 65536, 00:11:10.896 "uuid": "54df4b6f-2712-11ef-b084-113036b5c18d", 00:11:10.896 "assigned_rate_limits": { 00:11:10.896 "rw_ios_per_sec": 0, 00:11:10.896 "rw_mbytes_per_sec": 0, 00:11:10.896 "r_mbytes_per_sec": 0, 00:11:10.896 "w_mbytes_per_sec": 0 00:11:10.896 }, 00:11:10.896 "claimed": true, 00:11:10.896 "claim_type": "exclusive_write", 00:11:10.896 "zoned": false, 00:11:10.896 "supported_io_types": { 00:11:10.896 "read": true, 00:11:10.896 "write": true, 00:11:10.896 "unmap": true, 00:11:10.896 "write_zeroes": true, 00:11:10.896 "flush": true, 00:11:10.896 "reset": true, 00:11:10.896 "compare": false, 00:11:10.896 "compare_and_write": false, 00:11:10.896 "abort": true, 00:11:10.896 "nvme_admin": false, 00:11:10.896 "nvme_io": false 00:11:10.896 }, 00:11:10.896 "memory_domains": [ 00:11:10.896 { 00:11:10.896 "dma_device_id": "system", 00:11:10.896 "dma_device_type": 1 00:11:10.896 }, 00:11:10.896 { 00:11:10.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.896 "dma_device_type": 2 00:11:10.896 } 00:11:10.896 ], 00:11:10.896 "driver_specific": {} 00:11:10.896 } 00:11:10.896 ] 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.896 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.156 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.156 "name": "Existed_Raid", 00:11:11.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.156 "strip_size_kb": 0, 00:11:11.156 "state": "configuring", 00:11:11.156 "raid_level": "raid1", 00:11:11.156 "superblock": false, 00:11:11.156 "num_base_bdevs": 3, 00:11:11.156 "num_base_bdevs_discovered": 2, 00:11:11.156 "num_base_bdevs_operational": 3, 00:11:11.156 "base_bdevs_list": [ 00:11:11.156 { 00:11:11.156 "name": "BaseBdev1", 00:11:11.156 "uuid": 
"5350b9c6-2712-11ef-b084-113036b5c18d", 00:11:11.156 "is_configured": true, 00:11:11.156 "data_offset": 0, 00:11:11.156 "data_size": 65536 00:11:11.156 }, 00:11:11.156 { 00:11:11.156 "name": "BaseBdev2", 00:11:11.156 "uuid": "54df4b6f-2712-11ef-b084-113036b5c18d", 00:11:11.156 "is_configured": true, 00:11:11.156 "data_offset": 0, 00:11:11.156 "data_size": 65536 00:11:11.156 }, 00:11:11.156 { 00:11:11.156 "name": "BaseBdev3", 00:11:11.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.156 "is_configured": false, 00:11:11.156 "data_offset": 0, 00:11:11.156 "data_size": 0 00:11:11.156 } 00:11:11.156 ] 00:11:11.156 }' 00:11:11.156 10:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.156 10:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.723 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:11.982 [2024-06-10 10:15:17.355108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.982 [2024-06-10 10:15:17.355140] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a876a00 00:11:11.982 [2024-06-10 10:15:17.355144] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:11.982 [2024-06-10 10:15:17.355164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a8d9ec0 00:11:11.982 [2024-06-10 10:15:17.355264] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a876a00 00:11:11.982 [2024-06-10 10:15:17.355268] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a876a00 00:11:11.982 [2024-06-10 10:15:17.355297] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.982 BaseBdev3 00:11:11.982 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:11.982 10:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:11:11.982 10:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:11.982 10:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:11:11.982 10:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:11.982 10:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:11.982 10:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:12.241 10:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.499 [ 00:11:12.499 { 00:11:12.499 "name": "BaseBdev3", 00:11:12.499 "aliases": [ 00:11:12.499 "55c393ea-2712-11ef-b084-113036b5c18d" 00:11:12.499 ], 00:11:12.499 "product_name": "Malloc disk", 00:11:12.499 "block_size": 512, 00:11:12.499 "num_blocks": 65536, 00:11:12.499 "uuid": "55c393ea-2712-11ef-b084-113036b5c18d", 00:11:12.499 "assigned_rate_limits": { 00:11:12.499 "rw_ios_per_sec": 0, 00:11:12.499 "rw_mbytes_per_sec": 0, 00:11:12.499 "r_mbytes_per_sec": 0, 00:11:12.499 "w_mbytes_per_sec": 0 00:11:12.499 }, 00:11:12.499 
"claimed": true, 00:11:12.499 "claim_type": "exclusive_write", 00:11:12.499 "zoned": false, 00:11:12.499 "supported_io_types": { 00:11:12.499 "read": true, 00:11:12.499 "write": true, 00:11:12.499 "unmap": true, 00:11:12.499 "write_zeroes": true, 00:11:12.499 "flush": true, 00:11:12.499 "reset": true, 00:11:12.499 "compare": false, 00:11:12.499 "compare_and_write": false, 00:11:12.499 "abort": true, 00:11:12.499 "nvme_admin": false, 00:11:12.499 "nvme_io": false 00:11:12.499 }, 00:11:12.499 "memory_domains": [ 00:11:12.499 { 00:11:12.499 "dma_device_id": "system", 00:11:12.499 "dma_device_type": 1 00:11:12.499 }, 00:11:12.499 { 00:11:12.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.499 "dma_device_type": 2 00:11:12.499 } 00:11:12.499 ], 00:11:12.499 "driver_specific": {} 00:11:12.499 } 00:11:12.499 ] 00:11:12.499 10:15:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:11:12.499 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:12.499 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:12.499 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.500 10:15:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.758 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:12.758 "name": "Existed_Raid", 00:11:12.758 "uuid": "55c39988-2712-11ef-b084-113036b5c18d", 00:11:12.758 "strip_size_kb": 0, 00:11:12.759 "state": "online", 00:11:12.759 "raid_level": "raid1", 00:11:12.759 "superblock": false, 00:11:12.759 "num_base_bdevs": 3, 00:11:12.759 "num_base_bdevs_discovered": 3, 00:11:12.759 "num_base_bdevs_operational": 3, 00:11:12.759 "base_bdevs_list": [ 00:11:12.759 { 00:11:12.759 "name": "BaseBdev1", 00:11:12.759 "uuid": "5350b9c6-2712-11ef-b084-113036b5c18d", 00:11:12.759 "is_configured": true, 00:11:12.759 "data_offset": 0, 00:11:12.759 "data_size": 65536 00:11:12.759 }, 00:11:12.759 { 00:11:12.759 "name": "BaseBdev2", 00:11:12.759 "uuid": "54df4b6f-2712-11ef-b084-113036b5c18d", 00:11:12.759 "is_configured": true, 00:11:12.759 "data_offset": 0, 00:11:12.759 "data_size": 65536 00:11:12.759 }, 
00:11:12.759 { 00:11:12.759 "name": "BaseBdev3", 00:11:12.759 "uuid": "55c393ea-2712-11ef-b084-113036b5c18d", 00:11:12.759 "is_configured": true, 00:11:12.759 "data_offset": 0, 00:11:12.759 "data_size": 65536 00:11:12.759 } 00:11:12.759 ] 00:11:12.759 }' 00:11:12.759 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:12.759 10:15:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.017 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.017 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:13.017 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:13.017 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:13.017 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:13.017 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:13.017 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:13.017 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:13.276 [2024-06-10 10:15:18.727070] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.276 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:13.276 "name": "Existed_Raid", 00:11:13.276 "aliases": [ 00:11:13.276 "55c39988-2712-11ef-b084-113036b5c18d" 00:11:13.276 ], 00:11:13.276 "product_name": "Raid Volume", 00:11:13.276 "block_size": 512, 00:11:13.276 "num_blocks": 65536, 00:11:13.276 "uuid": "55c39988-2712-11ef-b084-113036b5c18d", 00:11:13.276 "assigned_rate_limits": { 00:11:13.276 "rw_ios_per_sec": 0, 00:11:13.276 "rw_mbytes_per_sec": 0, 00:11:13.276 "r_mbytes_per_sec": 0, 00:11:13.276 "w_mbytes_per_sec": 0 00:11:13.276 }, 00:11:13.276 "claimed": false, 00:11:13.276 "zoned": false, 00:11:13.276 "supported_io_types": { 00:11:13.276 "read": true, 00:11:13.276 "write": true, 00:11:13.276 "unmap": false, 00:11:13.276 "write_zeroes": true, 00:11:13.276 "flush": false, 00:11:13.276 "reset": true, 00:11:13.276 "compare": false, 00:11:13.276 "compare_and_write": false, 00:11:13.276 "abort": false, 00:11:13.276 "nvme_admin": false, 00:11:13.276 "nvme_io": false 00:11:13.276 }, 00:11:13.276 "memory_domains": [ 00:11:13.276 { 00:11:13.276 "dma_device_id": "system", 00:11:13.276 "dma_device_type": 1 00:11:13.276 }, 00:11:13.276 { 00:11:13.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.276 "dma_device_type": 2 00:11:13.276 }, 00:11:13.276 { 00:11:13.276 "dma_device_id": "system", 00:11:13.276 "dma_device_type": 1 00:11:13.276 }, 00:11:13.276 { 00:11:13.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.276 "dma_device_type": 2 00:11:13.276 }, 00:11:13.276 { 00:11:13.276 "dma_device_id": "system", 00:11:13.276 "dma_device_type": 1 00:11:13.276 }, 00:11:13.276 { 00:11:13.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.276 "dma_device_type": 2 00:11:13.276 } 00:11:13.276 ], 00:11:13.276 "driver_specific": { 00:11:13.276 "raid": { 00:11:13.276 "uuid": "55c39988-2712-11ef-b084-113036b5c18d", 00:11:13.276 "strip_size_kb": 0, 00:11:13.276 "state": "online", 00:11:13.276 "raid_level": "raid1", 00:11:13.276 
"superblock": false, 00:11:13.276 "num_base_bdevs": 3, 00:11:13.276 "num_base_bdevs_discovered": 3, 00:11:13.276 "num_base_bdevs_operational": 3, 00:11:13.276 "base_bdevs_list": [ 00:11:13.276 { 00:11:13.276 "name": "BaseBdev1", 00:11:13.276 "uuid": "5350b9c6-2712-11ef-b084-113036b5c18d", 00:11:13.276 "is_configured": true, 00:11:13.276 "data_offset": 0, 00:11:13.276 "data_size": 65536 00:11:13.276 }, 00:11:13.276 { 00:11:13.276 "name": "BaseBdev2", 00:11:13.276 "uuid": "54df4b6f-2712-11ef-b084-113036b5c18d", 00:11:13.276 "is_configured": true, 00:11:13.276 "data_offset": 0, 00:11:13.276 "data_size": 65536 00:11:13.276 }, 00:11:13.276 { 00:11:13.276 "name": "BaseBdev3", 00:11:13.276 "uuid": "55c393ea-2712-11ef-b084-113036b5c18d", 00:11:13.276 "is_configured": true, 00:11:13.276 "data_offset": 0, 00:11:13.276 "data_size": 65536 00:11:13.276 } 00:11:13.276 ] 00:11:13.276 } 00:11:13.276 } 00:11:13.276 }' 00:11:13.276 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.276 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:13.276 BaseBdev2 00:11:13.276 BaseBdev3' 00:11:13.276 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:13.276 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:13.276 10:15:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:13.534 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:13.534 "name": "BaseBdev1", 00:11:13.534 "aliases": [ 00:11:13.534 "5350b9c6-2712-11ef-b084-113036b5c18d" 00:11:13.534 ], 00:11:13.534 "product_name": "Malloc disk", 00:11:13.534 "block_size": 512, 00:11:13.534 "num_blocks": 65536, 00:11:13.534 "uuid": "5350b9c6-2712-11ef-b084-113036b5c18d", 00:11:13.534 "assigned_rate_limits": { 00:11:13.534 "rw_ios_per_sec": 0, 00:11:13.534 "rw_mbytes_per_sec": 0, 00:11:13.534 "r_mbytes_per_sec": 0, 00:11:13.534 "w_mbytes_per_sec": 0 00:11:13.534 }, 00:11:13.534 "claimed": true, 00:11:13.534 "claim_type": "exclusive_write", 00:11:13.534 "zoned": false, 00:11:13.534 "supported_io_types": { 00:11:13.534 "read": true, 00:11:13.534 "write": true, 00:11:13.534 "unmap": true, 00:11:13.534 "write_zeroes": true, 00:11:13.534 "flush": true, 00:11:13.534 "reset": true, 00:11:13.534 "compare": false, 00:11:13.534 "compare_and_write": false, 00:11:13.534 "abort": true, 00:11:13.534 "nvme_admin": false, 00:11:13.534 "nvme_io": false 00:11:13.534 }, 00:11:13.534 "memory_domains": [ 00:11:13.534 { 00:11:13.534 "dma_device_id": "system", 00:11:13.534 "dma_device_type": 1 00:11:13.534 }, 00:11:13.534 { 00:11:13.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.535 "dma_device_type": 2 00:11:13.535 } 00:11:13.535 ], 00:11:13.535 "driver_specific": {} 00:11:13.535 }' 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:13.535 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:13.793 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:13.794 "name": "BaseBdev2", 00:11:13.794 "aliases": [ 00:11:13.794 "54df4b6f-2712-11ef-b084-113036b5c18d" 00:11:13.794 ], 00:11:13.794 "product_name": "Malloc disk", 00:11:13.794 "block_size": 512, 00:11:13.794 "num_blocks": 65536, 00:11:13.794 "uuid": "54df4b6f-2712-11ef-b084-113036b5c18d", 00:11:13.794 "assigned_rate_limits": { 00:11:13.794 "rw_ios_per_sec": 0, 00:11:13.794 "rw_mbytes_per_sec": 0, 00:11:13.794 "r_mbytes_per_sec": 0, 00:11:13.794 "w_mbytes_per_sec": 0 00:11:13.794 }, 00:11:13.794 "claimed": true, 00:11:13.794 "claim_type": "exclusive_write", 00:11:13.794 "zoned": false, 00:11:13.794 "supported_io_types": { 00:11:13.794 "read": true, 00:11:13.794 "write": true, 00:11:13.794 "unmap": true, 00:11:13.794 "write_zeroes": true, 00:11:13.794 "flush": true, 00:11:13.794 "reset": true, 00:11:13.794 "compare": false, 00:11:13.794 "compare_and_write": false, 00:11:13.794 "abort": true, 00:11:13.794 "nvme_admin": false, 00:11:13.794 "nvme_io": false 00:11:13.794 }, 00:11:13.794 "memory_domains": [ 00:11:13.794 { 00:11:13.794 "dma_device_id": "system", 00:11:13.794 "dma_device_type": 1 00:11:13.794 }, 00:11:13.794 { 00:11:13.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.794 "dma_device_type": 2 00:11:13.794 } 00:11:13.794 ], 00:11:13.794 "driver_specific": {} 00:11:13.794 }' 00:11:13.794 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.794 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.794 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:13.794 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.794 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.053 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:14.053 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.053 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.053 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:14.053 10:15:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.053 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.053 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:14.053 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:14.053 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:14.053 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:14.311 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:14.311 "name": "BaseBdev3", 00:11:14.311 "aliases": [ 00:11:14.311 "55c393ea-2712-11ef-b084-113036b5c18d" 00:11:14.311 ], 00:11:14.311 "product_name": "Malloc disk", 00:11:14.311 "block_size": 512, 00:11:14.311 "num_blocks": 65536, 00:11:14.311 "uuid": "55c393ea-2712-11ef-b084-113036b5c18d", 00:11:14.311 "assigned_rate_limits": { 00:11:14.311 "rw_ios_per_sec": 0, 00:11:14.311 "rw_mbytes_per_sec": 0, 00:11:14.312 "r_mbytes_per_sec": 0, 00:11:14.312 "w_mbytes_per_sec": 0 00:11:14.312 }, 00:11:14.312 "claimed": true, 00:11:14.312 "claim_type": "exclusive_write", 00:11:14.312 "zoned": false, 00:11:14.312 "supported_io_types": { 00:11:14.312 "read": true, 00:11:14.312 "write": true, 00:11:14.312 "unmap": true, 00:11:14.312 "write_zeroes": true, 00:11:14.312 "flush": true, 00:11:14.312 "reset": true, 00:11:14.312 "compare": false, 00:11:14.312 "compare_and_write": false, 00:11:14.312 "abort": true, 00:11:14.312 "nvme_admin": false, 00:11:14.312 "nvme_io": false 00:11:14.312 }, 00:11:14.312 "memory_domains": [ 00:11:14.312 { 00:11:14.312 "dma_device_id": "system", 00:11:14.312 "dma_device_type": 1 00:11:14.312 }, 00:11:14.312 { 00:11:14.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.312 "dma_device_type": 2 00:11:14.312 } 00:11:14.312 ], 00:11:14.312 "driver_specific": {} 00:11:14.312 }' 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:14.312 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:14.570 
[2024-06-10 10:15:19.963063] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:14.570 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:14.571 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:14.571 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.571 10:15:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.829 10:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:14.829 "name": "Existed_Raid", 00:11:14.829 "uuid": "55c39988-2712-11ef-b084-113036b5c18d", 00:11:14.829 "strip_size_kb": 0, 00:11:14.829 "state": "online", 00:11:14.829 "raid_level": "raid1", 00:11:14.829 "superblock": false, 00:11:14.829 "num_base_bdevs": 3, 00:11:14.829 "num_base_bdevs_discovered": 2, 00:11:14.829 "num_base_bdevs_operational": 2, 00:11:14.829 "base_bdevs_list": [ 00:11:14.829 { 00:11:14.829 "name": null, 00:11:14.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.829 "is_configured": false, 00:11:14.829 "data_offset": 0, 00:11:14.829 "data_size": 65536 00:11:14.829 }, 00:11:14.829 { 00:11:14.829 "name": "BaseBdev2", 00:11:14.829 "uuid": "54df4b6f-2712-11ef-b084-113036b5c18d", 00:11:14.829 "is_configured": true, 00:11:14.829 "data_offset": 0, 00:11:14.829 "data_size": 65536 00:11:14.829 }, 00:11:14.829 { 00:11:14.829 "name": "BaseBdev3", 00:11:14.829 "uuid": "55c393ea-2712-11ef-b084-113036b5c18d", 00:11:14.829 "is_configured": true, 00:11:14.829 "data_offset": 0, 00:11:14.829 "data_size": 65536 00:11:14.829 } 00:11:14.829 ] 00:11:14.829 }' 00:11:14.829 10:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:14.829 10:15:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.087 10:15:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:15.087 10:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:15.087 10:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.087 10:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:15.346 10:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:15.346 10:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.346 10:15:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:15.605 [2024-06-10 10:15:21.016038] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:15.605 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:15.605 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:15.605 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:15.605 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.863 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:15.863 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:15.863 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:16.121 [2024-06-10 10:15:21.560984] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.121 [2024-06-10 10:15:21.561025] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.121 [2024-06-10 10:15:21.565929] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.121 [2024-06-10 10:15:21.565962] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.121 [2024-06-10 10:15:21.565966] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a876a00 name Existed_Raid, state offline 00:11:16.121 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:16.121 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:16.121 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:16.121 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.380 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:16.380 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:16.380 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:16.380 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:16.380 10:15:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:16.380 10:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:16.646 BaseBdev2 00:11:16.646 10:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:16.646 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:11:16.646 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:16.646 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:11:16.646 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:16.646 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:16.646 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:16.906 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.163 [ 00:11:17.163 { 00:11:17.163 "name": "BaseBdev2", 00:11:17.163 "aliases": [ 00:11:17.163 "589268d0-2712-11ef-b084-113036b5c18d" 00:11:17.163 ], 00:11:17.163 "product_name": "Malloc disk", 00:11:17.163 "block_size": 512, 00:11:17.163 "num_blocks": 65536, 00:11:17.163 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:17.163 "assigned_rate_limits": { 00:11:17.163 "rw_ios_per_sec": 0, 00:11:17.163 "rw_mbytes_per_sec": 0, 00:11:17.163 "r_mbytes_per_sec": 0, 00:11:17.163 "w_mbytes_per_sec": 0 00:11:17.163 }, 00:11:17.163 "claimed": false, 00:11:17.163 "zoned": false, 00:11:17.163 "supported_io_types": { 00:11:17.163 "read": true, 00:11:17.163 "write": true, 00:11:17.163 "unmap": true, 00:11:17.163 "write_zeroes": true, 00:11:17.163 "flush": true, 00:11:17.163 "reset": true, 00:11:17.163 "compare": false, 00:11:17.163 "compare_and_write": false, 00:11:17.163 "abort": true, 00:11:17.163 "nvme_admin": false, 00:11:17.163 "nvme_io": false 00:11:17.163 }, 00:11:17.163 "memory_domains": [ 00:11:17.163 { 00:11:17.163 "dma_device_id": "system", 00:11:17.163 "dma_device_type": 1 00:11:17.163 }, 00:11:17.163 { 00:11:17.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.164 "dma_device_type": 2 00:11:17.164 } 00:11:17.164 ], 00:11:17.164 "driver_specific": {} 00:11:17.164 } 00:11:17.164 ] 00:11:17.164 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:11:17.164 10:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:17.164 10:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:17.164 10:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.164 BaseBdev3 00:11:17.422 10:15:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:17.422 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:11:17.422 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:17.422 
10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:11:17.422 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:17.422 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:17.422 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:17.422 10:15:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.989 [ 00:11:17.989 { 00:11:17.990 "name": "BaseBdev3", 00:11:17.990 "aliases": [ 00:11:17.990 "58fc0080-2712-11ef-b084-113036b5c18d" 00:11:17.990 ], 00:11:17.990 "product_name": "Malloc disk", 00:11:17.990 "block_size": 512, 00:11:17.990 "num_blocks": 65536, 00:11:17.990 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:17.990 "assigned_rate_limits": { 00:11:17.990 "rw_ios_per_sec": 0, 00:11:17.990 "rw_mbytes_per_sec": 0, 00:11:17.990 "r_mbytes_per_sec": 0, 00:11:17.990 "w_mbytes_per_sec": 0 00:11:17.990 }, 00:11:17.990 "claimed": false, 00:11:17.990 "zoned": false, 00:11:17.990 "supported_io_types": { 00:11:17.990 "read": true, 00:11:17.990 "write": true, 00:11:17.990 "unmap": true, 00:11:17.990 "write_zeroes": true, 00:11:17.990 "flush": true, 00:11:17.990 "reset": true, 00:11:17.990 "compare": false, 00:11:17.990 "compare_and_write": false, 00:11:17.990 "abort": true, 00:11:17.990 "nvme_admin": false, 00:11:17.990 "nvme_io": false 00:11:17.990 }, 00:11:17.990 "memory_domains": [ 00:11:17.990 { 00:11:17.990 "dma_device_id": "system", 00:11:17.990 "dma_device_type": 1 00:11:17.990 }, 00:11:17.990 { 00:11:17.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.990 "dma_device_type": 2 00:11:17.990 } 00:11:17.990 ], 00:11:17.990 "driver_specific": {} 00:11:17.990 } 00:11:17.990 ] 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:17.990 [2024-06-10 10:15:23.509986] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.990 [2024-06-10 10:15:23.510037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.990 [2024-06-10 10:15:23.510046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.990 [2024-06-10 10:15:23.510500] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 
-- # local raid_level=raid1 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.990 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.249 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:18.249 "name": "Existed_Raid", 00:11:18.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.249 "strip_size_kb": 0, 00:11:18.249 "state": "configuring", 00:11:18.249 "raid_level": "raid1", 00:11:18.249 "superblock": false, 00:11:18.249 "num_base_bdevs": 3, 00:11:18.249 "num_base_bdevs_discovered": 2, 00:11:18.249 "num_base_bdevs_operational": 3, 00:11:18.249 "base_bdevs_list": [ 00:11:18.249 { 00:11:18.249 "name": "BaseBdev1", 00:11:18.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.249 "is_configured": false, 00:11:18.249 "data_offset": 0, 00:11:18.249 "data_size": 0 00:11:18.249 }, 00:11:18.249 { 00:11:18.249 "name": "BaseBdev2", 00:11:18.249 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:18.249 "is_configured": true, 00:11:18.249 "data_offset": 0, 00:11:18.249 "data_size": 65536 00:11:18.249 }, 00:11:18.249 { 00:11:18.249 "name": "BaseBdev3", 00:11:18.250 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:18.250 "is_configured": true, 00:11:18.250 "data_offset": 0, 00:11:18.250 "data_size": 65536 00:11:18.250 } 00:11:18.250 ] 00:11:18.250 }' 00:11:18.250 10:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:18.250 10:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.817 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:19.076 [2024-06-10 10:15:24.518050] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.076 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.335 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:19.335 "name": "Existed_Raid", 00:11:19.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.335 "strip_size_kb": 0, 00:11:19.335 "state": "configuring", 00:11:19.335 "raid_level": "raid1", 00:11:19.335 "superblock": false, 00:11:19.335 "num_base_bdevs": 3, 00:11:19.335 "num_base_bdevs_discovered": 1, 00:11:19.335 "num_base_bdevs_operational": 3, 00:11:19.335 "base_bdevs_list": [ 00:11:19.335 { 00:11:19.335 "name": "BaseBdev1", 00:11:19.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.335 "is_configured": false, 00:11:19.335 "data_offset": 0, 00:11:19.335 "data_size": 0 00:11:19.335 }, 00:11:19.335 { 00:11:19.335 "name": null, 00:11:19.335 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:19.335 "is_configured": false, 00:11:19.335 "data_offset": 0, 00:11:19.335 "data_size": 65536 00:11:19.335 }, 00:11:19.335 { 00:11:19.335 "name": "BaseBdev3", 00:11:19.335 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:19.335 "is_configured": true, 00:11:19.335 "data_offset": 0, 00:11:19.335 "data_size": 65536 00:11:19.335 } 00:11:19.335 ] 00:11:19.335 }' 00:11:19.335 10:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:19.335 10:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.594 10:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.594 10:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.852 10:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:19.852 10:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.110 [2024-06-10 10:15:25.618195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.110 BaseBdev1 00:11:20.111 10:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:20.111 10:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:11:20.111 10:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:20.111 10:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:11:20.111 10:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:20.111 10:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:20.111 10:15:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:20.400 10:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.659 [ 00:11:20.659 { 00:11:20.659 "name": "BaseBdev1", 00:11:20.659 "aliases": [ 00:11:20.659 "5ab06ca3-2712-11ef-b084-113036b5c18d" 00:11:20.659 ], 00:11:20.659 "product_name": "Malloc disk", 00:11:20.659 "block_size": 512, 00:11:20.659 "num_blocks": 65536, 00:11:20.659 "uuid": "5ab06ca3-2712-11ef-b084-113036b5c18d", 00:11:20.659 "assigned_rate_limits": { 00:11:20.659 "rw_ios_per_sec": 0, 00:11:20.659 "rw_mbytes_per_sec": 0, 00:11:20.659 "r_mbytes_per_sec": 0, 00:11:20.659 "w_mbytes_per_sec": 0 00:11:20.659 }, 00:11:20.659 "claimed": true, 00:11:20.659 "claim_type": "exclusive_write", 00:11:20.659 "zoned": false, 00:11:20.659 "supported_io_types": { 00:11:20.659 "read": true, 00:11:20.659 "write": true, 00:11:20.659 "unmap": true, 00:11:20.659 "write_zeroes": true, 00:11:20.659 "flush": true, 00:11:20.659 "reset": true, 00:11:20.659 "compare": false, 00:11:20.659 "compare_and_write": false, 00:11:20.659 "abort": true, 00:11:20.659 "nvme_admin": false, 00:11:20.659 "nvme_io": false 00:11:20.659 }, 00:11:20.659 "memory_domains": [ 00:11:20.659 { 00:11:20.659 "dma_device_id": "system", 00:11:20.659 "dma_device_type": 1 00:11:20.659 }, 00:11:20.659 { 00:11:20.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.659 "dma_device_type": 2 00:11:20.659 } 00:11:20.659 ], 00:11:20.659 "driver_specific": {} 00:11:20.659 } 00:11:20.659 ] 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.659 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.918 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:20.918 "name": "Existed_Raid", 00:11:20.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.918 "strip_size_kb": 0, 00:11:20.918 "state": "configuring", 
00:11:20.918 "raid_level": "raid1", 00:11:20.918 "superblock": false, 00:11:20.918 "num_base_bdevs": 3, 00:11:20.918 "num_base_bdevs_discovered": 2, 00:11:20.918 "num_base_bdevs_operational": 3, 00:11:20.918 "base_bdevs_list": [ 00:11:20.918 { 00:11:20.918 "name": "BaseBdev1", 00:11:20.918 "uuid": "5ab06ca3-2712-11ef-b084-113036b5c18d", 00:11:20.918 "is_configured": true, 00:11:20.918 "data_offset": 0, 00:11:20.918 "data_size": 65536 00:11:20.918 }, 00:11:20.918 { 00:11:20.918 "name": null, 00:11:20.918 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:20.918 "is_configured": false, 00:11:20.918 "data_offset": 0, 00:11:20.918 "data_size": 65536 00:11:20.918 }, 00:11:20.918 { 00:11:20.918 "name": "BaseBdev3", 00:11:20.918 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:20.918 "is_configured": true, 00:11:20.918 "data_offset": 0, 00:11:20.918 "data_size": 65536 00:11:20.918 } 00:11:20.918 ] 00:11:20.918 }' 00:11:20.918 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:20.918 10:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.177 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.177 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.435 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:21.435 10:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:21.724 [2024-06-10 10:15:27.306142] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.983 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.242 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:22.242 "name": "Existed_Raid", 00:11:22.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.242 
"strip_size_kb": 0, 00:11:22.242 "state": "configuring", 00:11:22.242 "raid_level": "raid1", 00:11:22.242 "superblock": false, 00:11:22.242 "num_base_bdevs": 3, 00:11:22.242 "num_base_bdevs_discovered": 1, 00:11:22.242 "num_base_bdevs_operational": 3, 00:11:22.242 "base_bdevs_list": [ 00:11:22.242 { 00:11:22.242 "name": "BaseBdev1", 00:11:22.242 "uuid": "5ab06ca3-2712-11ef-b084-113036b5c18d", 00:11:22.242 "is_configured": true, 00:11:22.242 "data_offset": 0, 00:11:22.242 "data_size": 65536 00:11:22.242 }, 00:11:22.242 { 00:11:22.242 "name": null, 00:11:22.242 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:22.242 "is_configured": false, 00:11:22.242 "data_offset": 0, 00:11:22.242 "data_size": 65536 00:11:22.242 }, 00:11:22.242 { 00:11:22.242 "name": null, 00:11:22.242 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:22.242 "is_configured": false, 00:11:22.242 "data_offset": 0, 00:11:22.242 "data_size": 65536 00:11:22.242 } 00:11:22.242 ] 00:11:22.242 }' 00:11:22.242 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:22.242 10:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.500 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.500 10:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.759 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:22.759 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:23.018 [2024-06-10 10:15:28.446169] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.018 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.277 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:23.277 "name": 
"Existed_Raid", 00:11:23.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.277 "strip_size_kb": 0, 00:11:23.277 "state": "configuring", 00:11:23.277 "raid_level": "raid1", 00:11:23.277 "superblock": false, 00:11:23.277 "num_base_bdevs": 3, 00:11:23.277 "num_base_bdevs_discovered": 2, 00:11:23.277 "num_base_bdevs_operational": 3, 00:11:23.277 "base_bdevs_list": [ 00:11:23.277 { 00:11:23.277 "name": "BaseBdev1", 00:11:23.277 "uuid": "5ab06ca3-2712-11ef-b084-113036b5c18d", 00:11:23.277 "is_configured": true, 00:11:23.277 "data_offset": 0, 00:11:23.277 "data_size": 65536 00:11:23.277 }, 00:11:23.277 { 00:11:23.277 "name": null, 00:11:23.277 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:23.277 "is_configured": false, 00:11:23.277 "data_offset": 0, 00:11:23.277 "data_size": 65536 00:11:23.277 }, 00:11:23.277 { 00:11:23.277 "name": "BaseBdev3", 00:11:23.277 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:23.277 "is_configured": true, 00:11:23.277 "data_offset": 0, 00:11:23.277 "data_size": 65536 00:11:23.277 } 00:11:23.277 ] 00:11:23.277 }' 00:11:23.277 10:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:23.277 10:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.536 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:23.536 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.795 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:23.795 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:24.054 [2024-06-10 10:15:29.546195] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.054 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.312 10:15:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:24.312 "name": "Existed_Raid", 00:11:24.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.312 "strip_size_kb": 0, 00:11:24.312 "state": "configuring", 00:11:24.312 "raid_level": "raid1", 00:11:24.312 "superblock": false, 00:11:24.312 "num_base_bdevs": 3, 00:11:24.312 "num_base_bdevs_discovered": 1, 00:11:24.312 "num_base_bdevs_operational": 3, 00:11:24.312 "base_bdevs_list": [ 00:11:24.312 { 00:11:24.312 "name": null, 00:11:24.312 "uuid": "5ab06ca3-2712-11ef-b084-113036b5c18d", 00:11:24.312 "is_configured": false, 00:11:24.312 "data_offset": 0, 00:11:24.312 "data_size": 65536 00:11:24.312 }, 00:11:24.312 { 00:11:24.312 "name": null, 00:11:24.312 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:24.312 "is_configured": false, 00:11:24.312 "data_offset": 0, 00:11:24.312 "data_size": 65536 00:11:24.312 }, 00:11:24.312 { 00:11:24.312 "name": "BaseBdev3", 00:11:24.312 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:24.312 "is_configured": true, 00:11:24.312 "data_offset": 0, 00:11:24.312 "data_size": 65536 00:11:24.312 } 00:11:24.312 ] 00:11:24.312 }' 00:11:24.312 10:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:24.312 10:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.877 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.877 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:24.877 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:24.877 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:25.444 [2024-06-10 10:15:30.767080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.444 10:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:25.444 10:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.444 "name": "Existed_Raid", 00:11:25.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.444 "strip_size_kb": 0, 00:11:25.444 "state": "configuring", 00:11:25.444 "raid_level": "raid1", 00:11:25.444 "superblock": false, 00:11:25.444 "num_base_bdevs": 3, 00:11:25.444 "num_base_bdevs_discovered": 2, 00:11:25.444 "num_base_bdevs_operational": 3, 00:11:25.444 "base_bdevs_list": [ 00:11:25.444 { 00:11:25.444 "name": null, 00:11:25.444 "uuid": "5ab06ca3-2712-11ef-b084-113036b5c18d", 00:11:25.444 "is_configured": false, 00:11:25.444 "data_offset": 0, 00:11:25.444 "data_size": 65536 00:11:25.444 }, 00:11:25.444 { 00:11:25.444 "name": "BaseBdev2", 00:11:25.444 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:25.444 "is_configured": true, 00:11:25.444 "data_offset": 0, 00:11:25.444 "data_size": 65536 00:11:25.444 }, 00:11:25.444 { 00:11:25.444 "name": "BaseBdev3", 00:11:25.444 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:25.444 "is_configured": true, 00:11:25.444 "data_offset": 0, 00:11:25.444 "data_size": 65536 00:11:25.444 } 00:11:25.444 ] 00:11:25.444 }' 00:11:25.444 10:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.444 10:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.011 10:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.012 10:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:26.270 10:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:26.270 10:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.270 10:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:26.576 10:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5ab06ca3-2712-11ef-b084-113036b5c18d 00:11:26.834 [2024-06-10 10:15:32.191237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:26.834 [2024-06-10 10:15:32.191263] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a876f00 00:11:26.834 [2024-06-10 10:15:32.191267] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:26.834 [2024-06-10 10:15:32.191288] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a8d9e20 00:11:26.834 [2024-06-10 10:15:32.191344] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a876f00 00:11:26.834 [2024-06-10 10:15:32.191347] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a876f00 00:11:26.834 [2024-06-10 10:15:32.191375] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.834 NewBaseBdev 00:11:26.834 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:26.834 10:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local 
bdev_name=NewBaseBdev 00:11:26.834 10:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:26.834 10:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:11:26.834 10:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:26.834 10:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:26.834 10:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:27.093 10:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:27.093 [ 00:11:27.093 { 00:11:27.093 "name": "NewBaseBdev", 00:11:27.093 "aliases": [ 00:11:27.093 "5ab06ca3-2712-11ef-b084-113036b5c18d" 00:11:27.093 ], 00:11:27.093 "product_name": "Malloc disk", 00:11:27.093 "block_size": 512, 00:11:27.093 "num_blocks": 65536, 00:11:27.093 "uuid": "5ab06ca3-2712-11ef-b084-113036b5c18d", 00:11:27.093 "assigned_rate_limits": { 00:11:27.093 "rw_ios_per_sec": 0, 00:11:27.093 "rw_mbytes_per_sec": 0, 00:11:27.093 "r_mbytes_per_sec": 0, 00:11:27.093 "w_mbytes_per_sec": 0 00:11:27.093 }, 00:11:27.093 "claimed": true, 00:11:27.093 "claim_type": "exclusive_write", 00:11:27.093 "zoned": false, 00:11:27.093 "supported_io_types": { 00:11:27.093 "read": true, 00:11:27.093 "write": true, 00:11:27.093 "unmap": true, 00:11:27.093 "write_zeroes": true, 00:11:27.093 "flush": true, 00:11:27.093 "reset": true, 00:11:27.093 "compare": false, 00:11:27.093 "compare_and_write": false, 00:11:27.093 "abort": true, 00:11:27.093 "nvme_admin": false, 00:11:27.093 "nvme_io": false 00:11:27.093 }, 00:11:27.093 "memory_domains": [ 00:11:27.093 { 00:11:27.093 "dma_device_id": "system", 00:11:27.093 "dma_device_type": 1 00:11:27.093 }, 00:11:27.093 { 00:11:27.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.093 "dma_device_type": 2 00:11:27.093 } 00:11:27.093 ], 00:11:27.093 "driver_specific": {} 00:11:27.093 } 00:11:27.093 ] 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.352 10:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.610 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.610 "name": "Existed_Raid", 00:11:27.610 "uuid": "5e9b6998-2712-11ef-b084-113036b5c18d", 00:11:27.610 "strip_size_kb": 0, 00:11:27.610 "state": "online", 00:11:27.610 "raid_level": "raid1", 00:11:27.610 "superblock": false, 00:11:27.610 "num_base_bdevs": 3, 00:11:27.610 "num_base_bdevs_discovered": 3, 00:11:27.610 "num_base_bdevs_operational": 3, 00:11:27.610 "base_bdevs_list": [ 00:11:27.610 { 00:11:27.610 "name": "NewBaseBdev", 00:11:27.610 "uuid": "5ab06ca3-2712-11ef-b084-113036b5c18d", 00:11:27.610 "is_configured": true, 00:11:27.610 "data_offset": 0, 00:11:27.610 "data_size": 65536 00:11:27.610 }, 00:11:27.610 { 00:11:27.610 "name": "BaseBdev2", 00:11:27.610 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:27.610 "is_configured": true, 00:11:27.610 "data_offset": 0, 00:11:27.610 "data_size": 65536 00:11:27.610 }, 00:11:27.610 { 00:11:27.610 "name": "BaseBdev3", 00:11:27.610 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:27.610 "is_configured": true, 00:11:27.610 "data_offset": 0, 00:11:27.610 "data_size": 65536 00:11:27.610 } 00:11:27.610 ] 00:11:27.610 }' 00:11:27.610 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.610 10:15:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.869 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:27.869 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:27.869 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:27.869 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:27.869 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:27.869 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:27.869 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:27.869 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:28.138 [2024-06-10 10:15:33.635217] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.138 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:28.138 "name": "Existed_Raid", 00:11:28.138 "aliases": [ 00:11:28.138 "5e9b6998-2712-11ef-b084-113036b5c18d" 00:11:28.138 ], 00:11:28.138 "product_name": "Raid Volume", 00:11:28.138 "block_size": 512, 00:11:28.138 "num_blocks": 65536, 00:11:28.138 "uuid": "5e9b6998-2712-11ef-b084-113036b5c18d", 00:11:28.138 "assigned_rate_limits": { 00:11:28.138 "rw_ios_per_sec": 0, 00:11:28.138 "rw_mbytes_per_sec": 0, 00:11:28.138 "r_mbytes_per_sec": 0, 00:11:28.138 "w_mbytes_per_sec": 0 00:11:28.138 }, 00:11:28.138 "claimed": false, 00:11:28.138 "zoned": false, 00:11:28.138 "supported_io_types": { 00:11:28.138 "read": true, 00:11:28.138 "write": true, 00:11:28.138 "unmap": false, 00:11:28.138 "write_zeroes": true, 
00:11:28.138 "flush": false, 00:11:28.138 "reset": true, 00:11:28.138 "compare": false, 00:11:28.138 "compare_and_write": false, 00:11:28.138 "abort": false, 00:11:28.138 "nvme_admin": false, 00:11:28.138 "nvme_io": false 00:11:28.138 }, 00:11:28.138 "memory_domains": [ 00:11:28.138 { 00:11:28.138 "dma_device_id": "system", 00:11:28.138 "dma_device_type": 1 00:11:28.138 }, 00:11:28.138 { 00:11:28.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.138 "dma_device_type": 2 00:11:28.138 }, 00:11:28.138 { 00:11:28.138 "dma_device_id": "system", 00:11:28.138 "dma_device_type": 1 00:11:28.138 }, 00:11:28.138 { 00:11:28.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.138 "dma_device_type": 2 00:11:28.138 }, 00:11:28.138 { 00:11:28.138 "dma_device_id": "system", 00:11:28.138 "dma_device_type": 1 00:11:28.138 }, 00:11:28.138 { 00:11:28.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.138 "dma_device_type": 2 00:11:28.138 } 00:11:28.138 ], 00:11:28.138 "driver_specific": { 00:11:28.138 "raid": { 00:11:28.138 "uuid": "5e9b6998-2712-11ef-b084-113036b5c18d", 00:11:28.138 "strip_size_kb": 0, 00:11:28.138 "state": "online", 00:11:28.138 "raid_level": "raid1", 00:11:28.138 "superblock": false, 00:11:28.138 "num_base_bdevs": 3, 00:11:28.138 "num_base_bdevs_discovered": 3, 00:11:28.138 "num_base_bdevs_operational": 3, 00:11:28.138 "base_bdevs_list": [ 00:11:28.138 { 00:11:28.138 "name": "NewBaseBdev", 00:11:28.138 "uuid": "5ab06ca3-2712-11ef-b084-113036b5c18d", 00:11:28.138 "is_configured": true, 00:11:28.138 "data_offset": 0, 00:11:28.138 "data_size": 65536 00:11:28.138 }, 00:11:28.138 { 00:11:28.138 "name": "BaseBdev2", 00:11:28.138 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:28.138 "is_configured": true, 00:11:28.138 "data_offset": 0, 00:11:28.138 "data_size": 65536 00:11:28.138 }, 00:11:28.138 { 00:11:28.138 "name": "BaseBdev3", 00:11:28.138 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:28.138 "is_configured": true, 00:11:28.138 "data_offset": 0, 00:11:28.139 "data_size": 65536 00:11:28.139 } 00:11:28.139 ] 00:11:28.139 } 00:11:28.139 } 00:11:28.139 }' 00:11:28.139 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.139 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:28.139 BaseBdev2 00:11:28.139 BaseBdev3' 00:11:28.139 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:28.139 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:28.139 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:28.399 "name": "NewBaseBdev", 00:11:28.399 "aliases": [ 00:11:28.399 "5ab06ca3-2712-11ef-b084-113036b5c18d" 00:11:28.399 ], 00:11:28.399 "product_name": "Malloc disk", 00:11:28.399 "block_size": 512, 00:11:28.399 "num_blocks": 65536, 00:11:28.399 "uuid": "5ab06ca3-2712-11ef-b084-113036b5c18d", 00:11:28.399 "assigned_rate_limits": { 00:11:28.399 "rw_ios_per_sec": 0, 00:11:28.399 "rw_mbytes_per_sec": 0, 00:11:28.399 "r_mbytes_per_sec": 0, 00:11:28.399 "w_mbytes_per_sec": 0 00:11:28.399 }, 00:11:28.399 "claimed": true, 00:11:28.399 "claim_type": "exclusive_write", 00:11:28.399 "zoned": 
false, 00:11:28.399 "supported_io_types": { 00:11:28.399 "read": true, 00:11:28.399 "write": true, 00:11:28.399 "unmap": true, 00:11:28.399 "write_zeroes": true, 00:11:28.399 "flush": true, 00:11:28.399 "reset": true, 00:11:28.399 "compare": false, 00:11:28.399 "compare_and_write": false, 00:11:28.399 "abort": true, 00:11:28.399 "nvme_admin": false, 00:11:28.399 "nvme_io": false 00:11:28.399 }, 00:11:28.399 "memory_domains": [ 00:11:28.399 { 00:11:28.399 "dma_device_id": "system", 00:11:28.399 "dma_device_type": 1 00:11:28.399 }, 00:11:28.399 { 00:11:28.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.399 "dma_device_type": 2 00:11:28.399 } 00:11:28.399 ], 00:11:28.399 "driver_specific": {} 00:11:28.399 }' 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:28.399 10:15:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:28.657 "name": "BaseBdev2", 00:11:28.657 "aliases": [ 00:11:28.657 "589268d0-2712-11ef-b084-113036b5c18d" 00:11:28.657 ], 00:11:28.657 "product_name": "Malloc disk", 00:11:28.657 "block_size": 512, 00:11:28.657 "num_blocks": 65536, 00:11:28.657 "uuid": "589268d0-2712-11ef-b084-113036b5c18d", 00:11:28.657 "assigned_rate_limits": { 00:11:28.657 "rw_ios_per_sec": 0, 00:11:28.657 "rw_mbytes_per_sec": 0, 00:11:28.657 "r_mbytes_per_sec": 0, 00:11:28.657 "w_mbytes_per_sec": 0 00:11:28.657 }, 00:11:28.657 "claimed": true, 00:11:28.657 "claim_type": "exclusive_write", 00:11:28.657 "zoned": false, 00:11:28.657 "supported_io_types": { 00:11:28.657 "read": true, 00:11:28.657 "write": true, 00:11:28.657 "unmap": true, 00:11:28.657 "write_zeroes": true, 00:11:28.657 "flush": true, 00:11:28.657 "reset": true, 00:11:28.657 "compare": false, 00:11:28.657 "compare_and_write": false, 00:11:28.657 "abort": true, 00:11:28.657 "nvme_admin": false, 00:11:28.657 "nvme_io": false 00:11:28.657 }, 00:11:28.657 "memory_domains": [ 00:11:28.657 { 00:11:28.657 "dma_device_id": "system", 
00:11:28.657 "dma_device_type": 1 00:11:28.657 }, 00:11:28.657 { 00:11:28.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.657 "dma_device_type": 2 00:11:28.657 } 00:11:28.657 ], 00:11:28.657 "driver_specific": {} 00:11:28.657 }' 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:28.657 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:28.923 "name": "BaseBdev3", 00:11:28.923 "aliases": [ 00:11:28.923 "58fc0080-2712-11ef-b084-113036b5c18d" 00:11:28.923 ], 00:11:28.923 "product_name": "Malloc disk", 00:11:28.923 "block_size": 512, 00:11:28.923 "num_blocks": 65536, 00:11:28.923 "uuid": "58fc0080-2712-11ef-b084-113036b5c18d", 00:11:28.923 "assigned_rate_limits": { 00:11:28.923 "rw_ios_per_sec": 0, 00:11:28.923 "rw_mbytes_per_sec": 0, 00:11:28.923 "r_mbytes_per_sec": 0, 00:11:28.923 "w_mbytes_per_sec": 0 00:11:28.923 }, 00:11:28.923 "claimed": true, 00:11:28.923 "claim_type": "exclusive_write", 00:11:28.923 "zoned": false, 00:11:28.923 "supported_io_types": { 00:11:28.923 "read": true, 00:11:28.923 "write": true, 00:11:28.923 "unmap": true, 00:11:28.923 "write_zeroes": true, 00:11:28.923 "flush": true, 00:11:28.923 "reset": true, 00:11:28.923 "compare": false, 00:11:28.923 "compare_and_write": false, 00:11:28.923 "abort": true, 00:11:28.923 "nvme_admin": false, 00:11:28.923 "nvme_io": false 00:11:28.923 }, 00:11:28.923 "memory_domains": [ 00:11:28.923 { 00:11:28.923 "dma_device_id": "system", 00:11:28.923 "dma_device_type": 1 00:11:28.923 }, 00:11:28.923 { 00:11:28.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.923 "dma_device_type": 2 00:11:28.923 } 00:11:28.923 ], 00:11:28.923 "driver_specific": {} 00:11:28.923 }' 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:28.923 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:29.181 [2024-06-10 10:15:34.739227] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.181 [2024-06-10 10:15:34.739256] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.181 [2024-06-10 10:15:34.739278] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.181 [2024-06-10 10:15:34.739342] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.181 [2024-06-10 10:15:34.739346] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a876f00 name Existed_Raid, state offline 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 56862 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 56862 ']' 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 56862 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps -c -o command 56862 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # tail -1 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:11:29.182 killing process with pid 56862 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 56862' 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 56862 00:11:29.182 [2024-06-10 10:15:34.777013] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.182 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 56862 00:11:29.441 [2024-06-10 10:15:34.792244] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.441 10:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:29.441 
00:11:29.441 real 0m24.631s 00:11:29.441 user 0m45.310s 00:11:29.441 sys 0m3.199s 00:11:29.441 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:29.441 10:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.441 ************************************ 00:11:29.441 END TEST raid_state_function_test 00:11:29.441 ************************************ 00:11:29.441 10:15:35 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:29.441 10:15:35 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:11:29.441 10:15:35 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:29.441 10:15:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.441 ************************************ 00:11:29.441 START TEST raid_state_function_test_sb 00:11:29.441 ************************************ 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 3 true 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 
'!=' raid1 ']' 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=57591 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 57591' 00:11:29.441 Process raid pid: 57591 00:11:29.441 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:29.442 10:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 57591 /var/tmp/spdk-raid.sock 00:11:29.442 10:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 57591 ']' 00:11:29.442 10:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:29.442 10:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:29.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:29.442 10:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:29.442 10:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:29.442 10:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.442 [2024-06-10 10:15:35.029554] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:11:29.442 [2024-06-10 10:15:35.029788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:30.010 EAL: TSC is not safe to use in SMP mode 00:11:30.010 EAL: TSC is not invariant 00:11:30.010 [2024-06-10 10:15:35.527557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.010 [2024-06-10 10:15:35.611568] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:30.010 [2024-06-10 10:15:35.613736] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.268 [2024-06-10 10:15:35.614461] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.268 [2024-06-10 10:15:35.614473] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.835 10:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:30.835 10:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:11:30.835 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:31.094 [2024-06-10 10:15:36.461392] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.094 [2024-06-10 10:15:36.461441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.094 [2024-06-10 10:15:36.461446] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.094 [2024-06-10 10:15:36.461455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.094 [2024-06-10 10:15:36.461458] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.094 [2024-06-10 10:15:36.461465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.094 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.352 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.352 "name": "Existed_Raid", 00:11:31.352 "uuid": "6126fadb-2712-11ef-b084-113036b5c18d", 00:11:31.352 "strip_size_kb": 0, 00:11:31.352 "state": "configuring", 00:11:31.352 "raid_level": "raid1", 00:11:31.352 "superblock": true, 00:11:31.352 "num_base_bdevs": 3, 00:11:31.352 "num_base_bdevs_discovered": 0, 00:11:31.352 
"num_base_bdevs_operational": 3, 00:11:31.352 "base_bdevs_list": [ 00:11:31.352 { 00:11:31.352 "name": "BaseBdev1", 00:11:31.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.352 "is_configured": false, 00:11:31.352 "data_offset": 0, 00:11:31.352 "data_size": 0 00:11:31.352 }, 00:11:31.352 { 00:11:31.352 "name": "BaseBdev2", 00:11:31.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.352 "is_configured": false, 00:11:31.352 "data_offset": 0, 00:11:31.352 "data_size": 0 00:11:31.352 }, 00:11:31.352 { 00:11:31.352 "name": "BaseBdev3", 00:11:31.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.352 "is_configured": false, 00:11:31.352 "data_offset": 0, 00:11:31.352 "data_size": 0 00:11:31.352 } 00:11:31.352 ] 00:11:31.352 }' 00:11:31.352 10:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.352 10:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.654 10:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:31.978 [2024-06-10 10:15:37.405399] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.978 [2024-06-10 10:15:37.405427] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ac14500 name Existed_Raid, state configuring 00:11:31.978 10:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:32.237 [2024-06-10 10:15:37.681413] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.237 [2024-06-10 10:15:37.681466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.237 [2024-06-10 10:15:37.681470] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.237 [2024-06-10 10:15:37.681479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.237 [2024-06-10 10:15:37.681482] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.237 [2024-06-10 10:15:37.681490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.237 10:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.495 [2024-06-10 10:15:37.994326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.495 BaseBdev1 00:11:32.495 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:32.495 10:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:11:32.495 10:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:32.495 10:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:11:32.495 10:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:32.495 10:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:32.495 10:15:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:32.754 10:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.321 [ 00:11:33.321 { 00:11:33.321 "name": "BaseBdev1", 00:11:33.321 "aliases": [ 00:11:33.321 "6210bf63-2712-11ef-b084-113036b5c18d" 00:11:33.321 ], 00:11:33.321 "product_name": "Malloc disk", 00:11:33.321 "block_size": 512, 00:11:33.321 "num_blocks": 65536, 00:11:33.321 "uuid": "6210bf63-2712-11ef-b084-113036b5c18d", 00:11:33.321 "assigned_rate_limits": { 00:11:33.321 "rw_ios_per_sec": 0, 00:11:33.321 "rw_mbytes_per_sec": 0, 00:11:33.321 "r_mbytes_per_sec": 0, 00:11:33.321 "w_mbytes_per_sec": 0 00:11:33.321 }, 00:11:33.321 "claimed": true, 00:11:33.322 "claim_type": "exclusive_write", 00:11:33.322 "zoned": false, 00:11:33.322 "supported_io_types": { 00:11:33.322 "read": true, 00:11:33.322 "write": true, 00:11:33.322 "unmap": true, 00:11:33.322 "write_zeroes": true, 00:11:33.322 "flush": true, 00:11:33.322 "reset": true, 00:11:33.322 "compare": false, 00:11:33.322 "compare_and_write": false, 00:11:33.322 "abort": true, 00:11:33.322 "nvme_admin": false, 00:11:33.322 "nvme_io": false 00:11:33.322 }, 00:11:33.322 "memory_domains": [ 00:11:33.322 { 00:11:33.322 "dma_device_id": "system", 00:11:33.322 "dma_device_type": 1 00:11:33.322 }, 00:11:33.322 { 00:11:33.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.322 "dma_device_type": 2 00:11:33.322 } 00:11:33.322 ], 00:11:33.322 "driver_specific": {} 00:11:33.322 } 00:11:33.322 ] 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.322 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.580 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:33.580 "name": "Existed_Raid", 00:11:33.580 "uuid": 
"61e123f6-2712-11ef-b084-113036b5c18d", 00:11:33.580 "strip_size_kb": 0, 00:11:33.580 "state": "configuring", 00:11:33.580 "raid_level": "raid1", 00:11:33.580 "superblock": true, 00:11:33.580 "num_base_bdevs": 3, 00:11:33.580 "num_base_bdevs_discovered": 1, 00:11:33.580 "num_base_bdevs_operational": 3, 00:11:33.580 "base_bdevs_list": [ 00:11:33.580 { 00:11:33.580 "name": "BaseBdev1", 00:11:33.580 "uuid": "6210bf63-2712-11ef-b084-113036b5c18d", 00:11:33.580 "is_configured": true, 00:11:33.580 "data_offset": 2048, 00:11:33.580 "data_size": 63488 00:11:33.580 }, 00:11:33.580 { 00:11:33.580 "name": "BaseBdev2", 00:11:33.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.580 "is_configured": false, 00:11:33.580 "data_offset": 0, 00:11:33.580 "data_size": 0 00:11:33.580 }, 00:11:33.580 { 00:11:33.580 "name": "BaseBdev3", 00:11:33.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.580 "is_configured": false, 00:11:33.580 "data_offset": 0, 00:11:33.580 "data_size": 0 00:11:33.580 } 00:11:33.580 ] 00:11:33.580 }' 00:11:33.580 10:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:33.580 10:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.838 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:34.096 [2024-06-10 10:15:39.506674] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.096 [2024-06-10 10:15:39.506710] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ac14500 name Existed_Raid, state configuring 00:11:34.096 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:34.355 [2024-06-10 10:15:39.794681] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.355 [2024-06-10 10:15:39.795375] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.355 [2024-06-10 10:15:39.795415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.355 [2024-06-10 10:15:39.795420] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.355 [2024-06-10 10:15:39.795428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.355 10:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.613 10:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.613 "name": "Existed_Raid", 00:11:34.613 "uuid": "63239968-2712-11ef-b084-113036b5c18d", 00:11:34.613 "strip_size_kb": 0, 00:11:34.613 "state": "configuring", 00:11:34.613 "raid_level": "raid1", 00:11:34.613 "superblock": true, 00:11:34.613 "num_base_bdevs": 3, 00:11:34.613 "num_base_bdevs_discovered": 1, 00:11:34.613 "num_base_bdevs_operational": 3, 00:11:34.613 "base_bdevs_list": [ 00:11:34.613 { 00:11:34.613 "name": "BaseBdev1", 00:11:34.613 "uuid": "6210bf63-2712-11ef-b084-113036b5c18d", 00:11:34.613 "is_configured": true, 00:11:34.613 "data_offset": 2048, 00:11:34.613 "data_size": 63488 00:11:34.613 }, 00:11:34.613 { 00:11:34.613 "name": "BaseBdev2", 00:11:34.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.613 "is_configured": false, 00:11:34.613 "data_offset": 0, 00:11:34.613 "data_size": 0 00:11:34.613 }, 00:11:34.613 { 00:11:34.613 "name": "BaseBdev3", 00:11:34.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.613 "is_configured": false, 00:11:34.613 "data_offset": 0, 00:11:34.613 "data_size": 0 00:11:34.613 } 00:11:34.613 ] 00:11:34.613 }' 00:11:34.613 10:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.613 10:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.181 10:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.181 [2024-06-10 10:15:40.782918] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.439 BaseBdev2 00:11:35.439 10:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:35.439 10:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:11:35.439 10:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:35.439 10:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:11:35.439 10:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:35.439 10:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:35.439 10:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:35.697 10:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.956 [ 00:11:35.956 { 00:11:35.956 "name": "BaseBdev2", 00:11:35.956 "aliases": [ 00:11:35.956 "63ba5ee9-2712-11ef-b084-113036b5c18d" 00:11:35.956 ], 00:11:35.956 "product_name": "Malloc disk", 00:11:35.956 "block_size": 512, 00:11:35.956 "num_blocks": 65536, 00:11:35.956 "uuid": "63ba5ee9-2712-11ef-b084-113036b5c18d", 00:11:35.956 "assigned_rate_limits": { 00:11:35.956 "rw_ios_per_sec": 0, 00:11:35.956 "rw_mbytes_per_sec": 0, 00:11:35.956 "r_mbytes_per_sec": 0, 00:11:35.956 "w_mbytes_per_sec": 0 00:11:35.956 }, 00:11:35.956 "claimed": true, 00:11:35.956 "claim_type": "exclusive_write", 00:11:35.956 "zoned": false, 00:11:35.956 "supported_io_types": { 00:11:35.956 "read": true, 00:11:35.956 "write": true, 00:11:35.956 "unmap": true, 00:11:35.956 "write_zeroes": true, 00:11:35.956 "flush": true, 00:11:35.956 "reset": true, 00:11:35.956 "compare": false, 00:11:35.956 "compare_and_write": false, 00:11:35.956 "abort": true, 00:11:35.956 "nvme_admin": false, 00:11:35.956 "nvme_io": false 00:11:35.956 }, 00:11:35.956 "memory_domains": [ 00:11:35.956 { 00:11:35.956 "dma_device_id": "system", 00:11:35.956 "dma_device_type": 1 00:11:35.956 }, 00:11:35.956 { 00:11:35.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.956 "dma_device_type": 2 00:11:35.956 } 00:11:35.956 ], 00:11:35.956 "driver_specific": {} 00:11:35.956 } 00:11:35.956 ] 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.956 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.216 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:36.216 "name": "Existed_Raid", 00:11:36.216 "uuid": "63239968-2712-11ef-b084-113036b5c18d", 00:11:36.216 "strip_size_kb": 0, 
00:11:36.216 "state": "configuring", 00:11:36.216 "raid_level": "raid1", 00:11:36.216 "superblock": true, 00:11:36.216 "num_base_bdevs": 3, 00:11:36.216 "num_base_bdevs_discovered": 2, 00:11:36.216 "num_base_bdevs_operational": 3, 00:11:36.216 "base_bdevs_list": [ 00:11:36.216 { 00:11:36.216 "name": "BaseBdev1", 00:11:36.216 "uuid": "6210bf63-2712-11ef-b084-113036b5c18d", 00:11:36.216 "is_configured": true, 00:11:36.217 "data_offset": 2048, 00:11:36.217 "data_size": 63488 00:11:36.217 }, 00:11:36.217 { 00:11:36.217 "name": "BaseBdev2", 00:11:36.217 "uuid": "63ba5ee9-2712-11ef-b084-113036b5c18d", 00:11:36.217 "is_configured": true, 00:11:36.217 "data_offset": 2048, 00:11:36.217 "data_size": 63488 00:11:36.217 }, 00:11:36.217 { 00:11:36.217 "name": "BaseBdev3", 00:11:36.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.217 "is_configured": false, 00:11:36.217 "data_offset": 0, 00:11:36.217 "data_size": 0 00:11:36.217 } 00:11:36.217 ] 00:11:36.217 }' 00:11:36.217 10:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:36.217 10:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.523 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.781 [2024-06-10 10:15:42.314987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.781 [2024-06-10 10:15:42.315066] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ac14a00 00:11:36.781 [2024-06-10 10:15:42.315077] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.781 [2024-06-10 10:15:42.315137] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac77ec0 00:11:36.781 [2024-06-10 10:15:42.315201] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ac14a00 00:11:36.781 [2024-06-10 10:15:42.315211] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ac14a00 00:11:36.781 [2024-06-10 10:15:42.315242] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.781 BaseBdev3 00:11:36.781 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:36.781 10:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:11:36.781 10:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:36.781 10:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:11:36.781 10:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:36.781 10:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:36.781 10:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:37.039 10:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.605 [ 00:11:37.605 { 00:11:37.605 "name": "BaseBdev3", 00:11:37.605 "aliases": [ 00:11:37.605 "64a4257c-2712-11ef-b084-113036b5c18d" 00:11:37.605 ], 
00:11:37.605 "product_name": "Malloc disk", 00:11:37.605 "block_size": 512, 00:11:37.605 "num_blocks": 65536, 00:11:37.605 "uuid": "64a4257c-2712-11ef-b084-113036b5c18d", 00:11:37.605 "assigned_rate_limits": { 00:11:37.605 "rw_ios_per_sec": 0, 00:11:37.605 "rw_mbytes_per_sec": 0, 00:11:37.605 "r_mbytes_per_sec": 0, 00:11:37.605 "w_mbytes_per_sec": 0 00:11:37.605 }, 00:11:37.605 "claimed": true, 00:11:37.605 "claim_type": "exclusive_write", 00:11:37.605 "zoned": false, 00:11:37.605 "supported_io_types": { 00:11:37.605 "read": true, 00:11:37.605 "write": true, 00:11:37.605 "unmap": true, 00:11:37.605 "write_zeroes": true, 00:11:37.605 "flush": true, 00:11:37.605 "reset": true, 00:11:37.605 "compare": false, 00:11:37.605 "compare_and_write": false, 00:11:37.605 "abort": true, 00:11:37.605 "nvme_admin": false, 00:11:37.605 "nvme_io": false 00:11:37.605 }, 00:11:37.605 "memory_domains": [ 00:11:37.605 { 00:11:37.605 "dma_device_id": "system", 00:11:37.605 "dma_device_type": 1 00:11:37.605 }, 00:11:37.605 { 00:11:37.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.605 "dma_device_type": 2 00:11:37.605 } 00:11:37.605 ], 00:11:37.605 "driver_specific": {} 00:11:37.605 } 00:11:37.605 ] 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:37.605 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:37.606 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.606 10:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.863 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:37.863 "name": "Existed_Raid", 00:11:37.863 "uuid": "63239968-2712-11ef-b084-113036b5c18d", 00:11:37.863 "strip_size_kb": 0, 00:11:37.863 "state": "online", 00:11:37.863 "raid_level": "raid1", 00:11:37.863 "superblock": true, 00:11:37.863 "num_base_bdevs": 3, 00:11:37.863 "num_base_bdevs_discovered": 3, 00:11:37.863 "num_base_bdevs_operational": 3, 00:11:37.863 "base_bdevs_list": [ 00:11:37.863 { 00:11:37.863 
"name": "BaseBdev1", 00:11:37.863 "uuid": "6210bf63-2712-11ef-b084-113036b5c18d", 00:11:37.863 "is_configured": true, 00:11:37.863 "data_offset": 2048, 00:11:37.863 "data_size": 63488 00:11:37.863 }, 00:11:37.863 { 00:11:37.863 "name": "BaseBdev2", 00:11:37.863 "uuid": "63ba5ee9-2712-11ef-b084-113036b5c18d", 00:11:37.863 "is_configured": true, 00:11:37.863 "data_offset": 2048, 00:11:37.863 "data_size": 63488 00:11:37.863 }, 00:11:37.863 { 00:11:37.863 "name": "BaseBdev3", 00:11:37.863 "uuid": "64a4257c-2712-11ef-b084-113036b5c18d", 00:11:37.863 "is_configured": true, 00:11:37.863 "data_offset": 2048, 00:11:37.863 "data_size": 63488 00:11:37.863 } 00:11:37.863 ] 00:11:37.863 }' 00:11:37.863 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:37.863 10:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.122 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:38.122 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:38.122 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:38.122 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:38.122 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:38.122 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:38.122 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:38.122 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:38.380 [2024-06-10 10:15:43.838908] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.380 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:38.380 "name": "Existed_Raid", 00:11:38.380 "aliases": [ 00:11:38.380 "63239968-2712-11ef-b084-113036b5c18d" 00:11:38.380 ], 00:11:38.380 "product_name": "Raid Volume", 00:11:38.380 "block_size": 512, 00:11:38.380 "num_blocks": 63488, 00:11:38.380 "uuid": "63239968-2712-11ef-b084-113036b5c18d", 00:11:38.380 "assigned_rate_limits": { 00:11:38.380 "rw_ios_per_sec": 0, 00:11:38.380 "rw_mbytes_per_sec": 0, 00:11:38.380 "r_mbytes_per_sec": 0, 00:11:38.380 "w_mbytes_per_sec": 0 00:11:38.380 }, 00:11:38.380 "claimed": false, 00:11:38.380 "zoned": false, 00:11:38.380 "supported_io_types": { 00:11:38.380 "read": true, 00:11:38.380 "write": true, 00:11:38.380 "unmap": false, 00:11:38.380 "write_zeroes": true, 00:11:38.380 "flush": false, 00:11:38.381 "reset": true, 00:11:38.381 "compare": false, 00:11:38.381 "compare_and_write": false, 00:11:38.381 "abort": false, 00:11:38.381 "nvme_admin": false, 00:11:38.381 "nvme_io": false 00:11:38.381 }, 00:11:38.381 "memory_domains": [ 00:11:38.381 { 00:11:38.381 "dma_device_id": "system", 00:11:38.381 "dma_device_type": 1 00:11:38.381 }, 00:11:38.381 { 00:11:38.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.381 "dma_device_type": 2 00:11:38.381 }, 00:11:38.381 { 00:11:38.381 "dma_device_id": "system", 00:11:38.381 "dma_device_type": 1 00:11:38.381 }, 00:11:38.381 { 00:11:38.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.381 "dma_device_type": 2 00:11:38.381 }, 
00:11:38.381 { 00:11:38.381 "dma_device_id": "system", 00:11:38.381 "dma_device_type": 1 00:11:38.381 }, 00:11:38.381 { 00:11:38.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.381 "dma_device_type": 2 00:11:38.381 } 00:11:38.381 ], 00:11:38.381 "driver_specific": { 00:11:38.381 "raid": { 00:11:38.381 "uuid": "63239968-2712-11ef-b084-113036b5c18d", 00:11:38.381 "strip_size_kb": 0, 00:11:38.381 "state": "online", 00:11:38.381 "raid_level": "raid1", 00:11:38.381 "superblock": true, 00:11:38.381 "num_base_bdevs": 3, 00:11:38.381 "num_base_bdevs_discovered": 3, 00:11:38.381 "num_base_bdevs_operational": 3, 00:11:38.381 "base_bdevs_list": [ 00:11:38.381 { 00:11:38.381 "name": "BaseBdev1", 00:11:38.381 "uuid": "6210bf63-2712-11ef-b084-113036b5c18d", 00:11:38.381 "is_configured": true, 00:11:38.381 "data_offset": 2048, 00:11:38.381 "data_size": 63488 00:11:38.381 }, 00:11:38.381 { 00:11:38.381 "name": "BaseBdev2", 00:11:38.381 "uuid": "63ba5ee9-2712-11ef-b084-113036b5c18d", 00:11:38.381 "is_configured": true, 00:11:38.381 "data_offset": 2048, 00:11:38.381 "data_size": 63488 00:11:38.381 }, 00:11:38.381 { 00:11:38.381 "name": "BaseBdev3", 00:11:38.381 "uuid": "64a4257c-2712-11ef-b084-113036b5c18d", 00:11:38.381 "is_configured": true, 00:11:38.381 "data_offset": 2048, 00:11:38.381 "data_size": 63488 00:11:38.381 } 00:11:38.381 ] 00:11:38.381 } 00:11:38.381 } 00:11:38.381 }' 00:11:38.381 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.381 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:38.381 BaseBdev2 00:11:38.381 BaseBdev3' 00:11:38.381 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:38.381 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:38.381 10:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:38.640 "name": "BaseBdev1", 00:11:38.640 "aliases": [ 00:11:38.640 "6210bf63-2712-11ef-b084-113036b5c18d" 00:11:38.640 ], 00:11:38.640 "product_name": "Malloc disk", 00:11:38.640 "block_size": 512, 00:11:38.640 "num_blocks": 65536, 00:11:38.640 "uuid": "6210bf63-2712-11ef-b084-113036b5c18d", 00:11:38.640 "assigned_rate_limits": { 00:11:38.640 "rw_ios_per_sec": 0, 00:11:38.640 "rw_mbytes_per_sec": 0, 00:11:38.640 "r_mbytes_per_sec": 0, 00:11:38.640 "w_mbytes_per_sec": 0 00:11:38.640 }, 00:11:38.640 "claimed": true, 00:11:38.640 "claim_type": "exclusive_write", 00:11:38.640 "zoned": false, 00:11:38.640 "supported_io_types": { 00:11:38.640 "read": true, 00:11:38.640 "write": true, 00:11:38.640 "unmap": true, 00:11:38.640 "write_zeroes": true, 00:11:38.640 "flush": true, 00:11:38.640 "reset": true, 00:11:38.640 "compare": false, 00:11:38.640 "compare_and_write": false, 00:11:38.640 "abort": true, 00:11:38.640 "nvme_admin": false, 00:11:38.640 "nvme_io": false 00:11:38.640 }, 00:11:38.640 "memory_domains": [ 00:11:38.640 { 00:11:38.640 "dma_device_id": "system", 00:11:38.640 "dma_device_type": 1 00:11:38.640 }, 00:11:38.640 { 00:11:38.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.640 "dma_device_type": 2 00:11:38.640 } 00:11:38.640 ], 00:11:38.640 "driver_specific": {} 
00:11:38.640 }' 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:38.640 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:39.207 "name": "BaseBdev2", 00:11:39.207 "aliases": [ 00:11:39.207 "63ba5ee9-2712-11ef-b084-113036b5c18d" 00:11:39.207 ], 00:11:39.207 "product_name": "Malloc disk", 00:11:39.207 "block_size": 512, 00:11:39.207 "num_blocks": 65536, 00:11:39.207 "uuid": "63ba5ee9-2712-11ef-b084-113036b5c18d", 00:11:39.207 "assigned_rate_limits": { 00:11:39.207 "rw_ios_per_sec": 0, 00:11:39.207 "rw_mbytes_per_sec": 0, 00:11:39.207 "r_mbytes_per_sec": 0, 00:11:39.207 "w_mbytes_per_sec": 0 00:11:39.207 }, 00:11:39.207 "claimed": true, 00:11:39.207 "claim_type": "exclusive_write", 00:11:39.207 "zoned": false, 00:11:39.207 "supported_io_types": { 00:11:39.207 "read": true, 00:11:39.207 "write": true, 00:11:39.207 "unmap": true, 00:11:39.207 "write_zeroes": true, 00:11:39.207 "flush": true, 00:11:39.207 "reset": true, 00:11:39.207 "compare": false, 00:11:39.207 "compare_and_write": false, 00:11:39.207 "abort": true, 00:11:39.207 "nvme_admin": false, 00:11:39.207 "nvme_io": false 00:11:39.207 }, 00:11:39.207 "memory_domains": [ 00:11:39.207 { 00:11:39.207 "dma_device_id": "system", 00:11:39.207 "dma_device_type": 1 00:11:39.207 }, 00:11:39.207 { 00:11:39.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.207 "dma_device_type": 2 00:11:39.207 } 00:11:39.207 ], 00:11:39.207 "driver_specific": {} 00:11:39.207 }' 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:39.207 
10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:39.207 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:39.474 "name": "BaseBdev3", 00:11:39.474 "aliases": [ 00:11:39.474 "64a4257c-2712-11ef-b084-113036b5c18d" 00:11:39.474 ], 00:11:39.474 "product_name": "Malloc disk", 00:11:39.474 "block_size": 512, 00:11:39.474 "num_blocks": 65536, 00:11:39.474 "uuid": "64a4257c-2712-11ef-b084-113036b5c18d", 00:11:39.474 "assigned_rate_limits": { 00:11:39.474 "rw_ios_per_sec": 0, 00:11:39.474 "rw_mbytes_per_sec": 0, 00:11:39.474 "r_mbytes_per_sec": 0, 00:11:39.474 "w_mbytes_per_sec": 0 00:11:39.474 }, 00:11:39.474 "claimed": true, 00:11:39.474 "claim_type": "exclusive_write", 00:11:39.474 "zoned": false, 00:11:39.474 "supported_io_types": { 00:11:39.474 "read": true, 00:11:39.474 "write": true, 00:11:39.474 "unmap": true, 00:11:39.474 "write_zeroes": true, 00:11:39.474 "flush": true, 00:11:39.474 "reset": true, 00:11:39.474 "compare": false, 00:11:39.474 "compare_and_write": false, 00:11:39.474 "abort": true, 00:11:39.474 "nvme_admin": false, 00:11:39.474 "nvme_io": false 00:11:39.474 }, 00:11:39.474 "memory_domains": [ 00:11:39.474 { 00:11:39.474 "dma_device_id": "system", 00:11:39.474 "dma_device_type": 1 00:11:39.474 }, 00:11:39.474 { 00:11:39.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.474 "dma_device_type": 2 00:11:39.474 } 00:11:39.474 ], 00:11:39.474 "driver_specific": {} 00:11:39.474 }' 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:39.474 10:15:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:39.474 10:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:39.733 [2024-06-10 10:15:45.270930] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.733 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.990 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.990 "name": "Existed_Raid", 00:11:39.990 "uuid": "63239968-2712-11ef-b084-113036b5c18d", 00:11:39.990 "strip_size_kb": 0, 00:11:39.990 "state": "online", 00:11:39.990 "raid_level": "raid1", 00:11:39.990 "superblock": true, 00:11:39.990 "num_base_bdevs": 3, 00:11:39.990 "num_base_bdevs_discovered": 2, 00:11:39.990 "num_base_bdevs_operational": 2, 00:11:39.990 "base_bdevs_list": [ 00:11:39.990 { 00:11:39.990 "name": null, 00:11:39.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.990 "is_configured": false, 00:11:39.990 "data_offset": 2048, 00:11:39.991 "data_size": 63488 00:11:39.991 }, 00:11:39.991 { 00:11:39.991 "name": "BaseBdev2", 00:11:39.991 "uuid": 
"63ba5ee9-2712-11ef-b084-113036b5c18d", 00:11:39.991 "is_configured": true, 00:11:39.991 "data_offset": 2048, 00:11:39.991 "data_size": 63488 00:11:39.991 }, 00:11:39.991 { 00:11:39.991 "name": "BaseBdev3", 00:11:39.991 "uuid": "64a4257c-2712-11ef-b084-113036b5c18d", 00:11:39.991 "is_configured": true, 00:11:39.991 "data_offset": 2048, 00:11:39.991 "data_size": 63488 00:11:39.991 } 00:11:39.991 ] 00:11:39.991 }' 00:11:39.991 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.991 10:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.555 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:40.555 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:40.555 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.555 10:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:40.814 10:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:40.814 10:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:40.814 10:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:41.073 [2024-06-10 10:15:46.523739] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:41.073 10:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:41.073 10:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:41.073 10:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.073 10:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:41.331 10:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:41.331 10:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:41.331 10:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:41.589 [2024-06-10 10:15:47.108594] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.589 [2024-06-10 10:15:47.108628] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.589 [2024-06-10 10:15:47.113451] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.589 [2024-06-10 10:15:47.113464] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.589 [2024-06-10 10:15:47.113468] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ac14a00 name Existed_Raid, state offline 00:11:41.589 10:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:41.589 10:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:41.589 10:15:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.589 10:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:41.847 10:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:41.847 10:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:41.847 10:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:41.847 10:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:41.847 10:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:41.847 10:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:42.105 BaseBdev2 00:11:42.105 10:15:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:42.105 10:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:11:42.105 10:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:42.105 10:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:11:42.105 10:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:42.105 10:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:42.105 10:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:42.362 10:15:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.621 [ 00:11:42.621 { 00:11:42.621 "name": "BaseBdev2", 00:11:42.621 "aliases": [ 00:11:42.621 "67d188da-2712-11ef-b084-113036b5c18d" 00:11:42.621 ], 00:11:42.621 "product_name": "Malloc disk", 00:11:42.621 "block_size": 512, 00:11:42.621 "num_blocks": 65536, 00:11:42.621 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:42.621 "assigned_rate_limits": { 00:11:42.621 "rw_ios_per_sec": 0, 00:11:42.621 "rw_mbytes_per_sec": 0, 00:11:42.621 "r_mbytes_per_sec": 0, 00:11:42.621 "w_mbytes_per_sec": 0 00:11:42.621 }, 00:11:42.621 "claimed": false, 00:11:42.621 "zoned": false, 00:11:42.621 "supported_io_types": { 00:11:42.621 "read": true, 00:11:42.621 "write": true, 00:11:42.621 "unmap": true, 00:11:42.621 "write_zeroes": true, 00:11:42.621 "flush": true, 00:11:42.621 "reset": true, 00:11:42.621 "compare": false, 00:11:42.621 "compare_and_write": false, 00:11:42.621 "abort": true, 00:11:42.621 "nvme_admin": false, 00:11:42.621 "nvme_io": false 00:11:42.621 }, 00:11:42.621 "memory_domains": [ 00:11:42.621 { 00:11:42.621 "dma_device_id": "system", 00:11:42.621 "dma_device_type": 1 00:11:42.621 }, 00:11:42.621 { 00:11:42.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.621 "dma_device_type": 2 00:11:42.621 } 00:11:42.621 ], 00:11:42.621 "driver_specific": {} 00:11:42.621 } 00:11:42.621 ] 00:11:42.621 10:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 
0 00:11:42.621 10:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:42.621 10:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:42.621 10:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:42.879 BaseBdev3 00:11:42.879 10:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:42.879 10:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:11:42.879 10:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:42.879 10:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:11:42.879 10:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:42.879 10:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:42.879 10:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:43.137 10:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:43.409 [ 00:11:43.409 { 00:11:43.409 "name": "BaseBdev3", 00:11:43.409 "aliases": [ 00:11:43.409 "684afeeb-2712-11ef-b084-113036b5c18d" 00:11:43.409 ], 00:11:43.409 "product_name": "Malloc disk", 00:11:43.409 "block_size": 512, 00:11:43.409 "num_blocks": 65536, 00:11:43.409 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:43.409 "assigned_rate_limits": { 00:11:43.409 "rw_ios_per_sec": 0, 00:11:43.409 "rw_mbytes_per_sec": 0, 00:11:43.409 "r_mbytes_per_sec": 0, 00:11:43.409 "w_mbytes_per_sec": 0 00:11:43.409 }, 00:11:43.409 "claimed": false, 00:11:43.409 "zoned": false, 00:11:43.409 "supported_io_types": { 00:11:43.409 "read": true, 00:11:43.409 "write": true, 00:11:43.409 "unmap": true, 00:11:43.409 "write_zeroes": true, 00:11:43.409 "flush": true, 00:11:43.409 "reset": true, 00:11:43.409 "compare": false, 00:11:43.409 "compare_and_write": false, 00:11:43.409 "abort": true, 00:11:43.409 "nvme_admin": false, 00:11:43.409 "nvme_io": false 00:11:43.409 }, 00:11:43.409 "memory_domains": [ 00:11:43.409 { 00:11:43.409 "dma_device_id": "system", 00:11:43.409 "dma_device_type": 1 00:11:43.409 }, 00:11:43.409 { 00:11:43.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.409 "dma_device_type": 2 00:11:43.409 } 00:11:43.409 ], 00:11:43.409 "driver_specific": {} 00:11:43.409 } 00:11:43.409 ] 00:11:43.409 10:15:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:11:43.409 10:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:43.409 10:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:43.409 10:15:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:43.668 [2024-06-10 10:15:49.197501] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
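At this point in the trace the test has just re-created the BaseBdev2 and BaseBdev3 malloc bdevs and called bdev_raid_create for a raid1 volume with a superblock while BaseBdev1 deliberately does not exist yet (hence the NOTICE above). A minimal by-hand sketch of the same RPC sequence against the test socket follows; the RPC shorthand variable is introduced here for brevity, and the relative scripts/rpc.py path stands in for the absolute path used in the trace:

  RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 512 -b BaseBdev2        # 32 MiB malloc bdev, 512-byte blocks
  $RPC bdev_malloc_create 32 512 -b BaseBdev3
  # -s requests a superblock, -r picks the RAID level; BaseBdev1 is still missing on purpose.
  $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # The raid bdev remains in the "configuring" state until every expected base bdev has
  # been discovered; only then does bdev_raid_get_bdevs report it as "online".
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
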
00:11:43.668 [2024-06-10 10:15:49.197551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.668 [2024-06-10 10:15:49.197558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.668 [2024-06-10 10:15:49.198005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.668 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.926 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:43.926 "name": "Existed_Raid", 00:11:43.926 "uuid": "68be5b2e-2712-11ef-b084-113036b5c18d", 00:11:43.926 "strip_size_kb": 0, 00:11:43.926 "state": "configuring", 00:11:43.926 "raid_level": "raid1", 00:11:43.926 "superblock": true, 00:11:43.926 "num_base_bdevs": 3, 00:11:43.926 "num_base_bdevs_discovered": 2, 00:11:43.926 "num_base_bdevs_operational": 3, 00:11:43.926 "base_bdevs_list": [ 00:11:43.926 { 00:11:43.926 "name": "BaseBdev1", 00:11:43.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.926 "is_configured": false, 00:11:43.926 "data_offset": 0, 00:11:43.926 "data_size": 0 00:11:43.926 }, 00:11:43.926 { 00:11:43.926 "name": "BaseBdev2", 00:11:43.926 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:43.926 "is_configured": true, 00:11:43.927 "data_offset": 2048, 00:11:43.927 "data_size": 63488 00:11:43.927 }, 00:11:43.927 { 00:11:43.927 "name": "BaseBdev3", 00:11:43.927 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:43.927 "is_configured": true, 00:11:43.927 "data_offset": 2048, 00:11:43.927 "data_size": 63488 00:11:43.927 } 00:11:43.927 ] 00:11:43.927 }' 00:11:43.927 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:43.927 10:15:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.492 10:15:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:44.492 [2024-06-10 
10:15:50.065522] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.492 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.090 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:45.090 "name": "Existed_Raid", 00:11:45.090 "uuid": "68be5b2e-2712-11ef-b084-113036b5c18d", 00:11:45.090 "strip_size_kb": 0, 00:11:45.090 "state": "configuring", 00:11:45.090 "raid_level": "raid1", 00:11:45.090 "superblock": true, 00:11:45.090 "num_base_bdevs": 3, 00:11:45.090 "num_base_bdevs_discovered": 1, 00:11:45.090 "num_base_bdevs_operational": 3, 00:11:45.090 "base_bdevs_list": [ 00:11:45.090 { 00:11:45.090 "name": "BaseBdev1", 00:11:45.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.090 "is_configured": false, 00:11:45.090 "data_offset": 0, 00:11:45.090 "data_size": 0 00:11:45.090 }, 00:11:45.090 { 00:11:45.090 "name": null, 00:11:45.090 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:45.090 "is_configured": false, 00:11:45.090 "data_offset": 2048, 00:11:45.090 "data_size": 63488 00:11:45.090 }, 00:11:45.090 { 00:11:45.090 "name": "BaseBdev3", 00:11:45.090 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:45.090 "is_configured": true, 00:11:45.090 "data_offset": 2048, 00:11:45.090 "data_size": 63488 00:11:45.090 } 00:11:45.090 ] 00:11:45.090 }' 00:11:45.090 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:45.090 10:15:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.350 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.350 10:15:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:45.610 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:45.610 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.869 [2024-06-10 10:15:51.317661] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.869 BaseBdev1 00:11:45.869 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:45.869 10:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:11:45.869 10:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:45.869 10:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:11:45.869 10:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:45.869 10:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:45.869 10:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:46.128 10:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:46.387 [ 00:11:46.387 { 00:11:46.387 "name": "BaseBdev1", 00:11:46.387 "aliases": [ 00:11:46.387 "6a01da7a-2712-11ef-b084-113036b5c18d" 00:11:46.387 ], 00:11:46.387 "product_name": "Malloc disk", 00:11:46.387 "block_size": 512, 00:11:46.387 "num_blocks": 65536, 00:11:46.387 "uuid": "6a01da7a-2712-11ef-b084-113036b5c18d", 00:11:46.387 "assigned_rate_limits": { 00:11:46.387 "rw_ios_per_sec": 0, 00:11:46.387 "rw_mbytes_per_sec": 0, 00:11:46.387 "r_mbytes_per_sec": 0, 00:11:46.387 "w_mbytes_per_sec": 0 00:11:46.387 }, 00:11:46.387 "claimed": true, 00:11:46.387 "claim_type": "exclusive_write", 00:11:46.388 "zoned": false, 00:11:46.388 "supported_io_types": { 00:11:46.388 "read": true, 00:11:46.388 "write": true, 00:11:46.388 "unmap": true, 00:11:46.388 "write_zeroes": true, 00:11:46.388 "flush": true, 00:11:46.388 "reset": true, 00:11:46.388 "compare": false, 00:11:46.388 "compare_and_write": false, 00:11:46.388 "abort": true, 00:11:46.388 "nvme_admin": false, 00:11:46.388 "nvme_io": false 00:11:46.388 }, 00:11:46.388 "memory_domains": [ 00:11:46.388 { 00:11:46.388 "dma_device_id": "system", 00:11:46.388 "dma_device_type": 1 00:11:46.388 }, 00:11:46.388 { 00:11:46.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.388 "dma_device_type": 2 00:11:46.388 } 00:11:46.388 ], 00:11:46.388 "driver_specific": {} 00:11:46.388 } 00:11:46.388 ] 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.388 10:15:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.649 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:46.649 "name": "Existed_Raid", 00:11:46.649 "uuid": "68be5b2e-2712-11ef-b084-113036b5c18d", 00:11:46.649 "strip_size_kb": 0, 00:11:46.649 "state": "configuring", 00:11:46.649 "raid_level": "raid1", 00:11:46.649 "superblock": true, 00:11:46.649 "num_base_bdevs": 3, 00:11:46.649 "num_base_bdevs_discovered": 2, 00:11:46.649 "num_base_bdevs_operational": 3, 00:11:46.649 "base_bdevs_list": [ 00:11:46.649 { 00:11:46.649 "name": "BaseBdev1", 00:11:46.649 "uuid": "6a01da7a-2712-11ef-b084-113036b5c18d", 00:11:46.649 "is_configured": true, 00:11:46.649 "data_offset": 2048, 00:11:46.649 "data_size": 63488 00:11:46.649 }, 00:11:46.649 { 00:11:46.649 "name": null, 00:11:46.649 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:46.649 "is_configured": false, 00:11:46.649 "data_offset": 2048, 00:11:46.649 "data_size": 63488 00:11:46.649 }, 00:11:46.649 { 00:11:46.649 "name": "BaseBdev3", 00:11:46.649 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:46.649 "is_configured": true, 00:11:46.649 "data_offset": 2048, 00:11:46.649 "data_size": 63488 00:11:46.649 } 00:11:46.649 ] 00:11:46.649 }' 00:11:46.649 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.649 10:15:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.909 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:46.909 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.167 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:47.167 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:47.426 [2024-06-10 10:15:52.913624] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.426 10:15:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.684 10:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:47.684 "name": "Existed_Raid", 00:11:47.684 "uuid": "68be5b2e-2712-11ef-b084-113036b5c18d", 00:11:47.684 "strip_size_kb": 0, 00:11:47.684 "state": "configuring", 00:11:47.684 "raid_level": "raid1", 00:11:47.684 "superblock": true, 00:11:47.684 "num_base_bdevs": 3, 00:11:47.684 "num_base_bdevs_discovered": 1, 00:11:47.684 "num_base_bdevs_operational": 3, 00:11:47.684 "base_bdevs_list": [ 00:11:47.684 { 00:11:47.684 "name": "BaseBdev1", 00:11:47.684 "uuid": "6a01da7a-2712-11ef-b084-113036b5c18d", 00:11:47.684 "is_configured": true, 00:11:47.684 "data_offset": 2048, 00:11:47.684 "data_size": 63488 00:11:47.684 }, 00:11:47.684 { 00:11:47.684 "name": null, 00:11:47.684 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:47.684 "is_configured": false, 00:11:47.684 "data_offset": 2048, 00:11:47.684 "data_size": 63488 00:11:47.684 }, 00:11:47.684 { 00:11:47.684 "name": null, 00:11:47.684 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:47.684 "is_configured": false, 00:11:47.684 "data_offset": 2048, 00:11:47.684 "data_size": 63488 00:11:47.684 } 00:11:47.684 ] 00:11:47.684 }' 00:11:47.684 10:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:47.684 10:15:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 10:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:48.251 10:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.251 10:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:48.251 10:15:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:48.510 [2024-06-10 10:15:54.053667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:48.510 10:15:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.510 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.769 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:48.769 "name": "Existed_Raid", 00:11:48.769 "uuid": "68be5b2e-2712-11ef-b084-113036b5c18d", 00:11:48.769 "strip_size_kb": 0, 00:11:48.769 "state": "configuring", 00:11:48.769 "raid_level": "raid1", 00:11:48.769 "superblock": true, 00:11:48.769 "num_base_bdevs": 3, 00:11:48.769 "num_base_bdevs_discovered": 2, 00:11:48.769 "num_base_bdevs_operational": 3, 00:11:48.769 "base_bdevs_list": [ 00:11:48.769 { 00:11:48.769 "name": "BaseBdev1", 00:11:48.769 "uuid": "6a01da7a-2712-11ef-b084-113036b5c18d", 00:11:48.769 "is_configured": true, 00:11:48.769 "data_offset": 2048, 00:11:48.769 "data_size": 63488 00:11:48.769 }, 00:11:48.769 { 00:11:48.769 "name": null, 00:11:48.769 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:48.769 "is_configured": false, 00:11:48.769 "data_offset": 2048, 00:11:48.769 "data_size": 63488 00:11:48.769 }, 00:11:48.769 { 00:11:48.769 "name": "BaseBdev3", 00:11:48.769 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:48.769 "is_configured": true, 00:11:48.769 "data_offset": 2048, 00:11:48.769 "data_size": 63488 00:11:48.769 } 00:11:48.769 ] 00:11:48.769 }' 00:11:48.769 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:48.769 10:15:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.339 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.339 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:49.599 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:49.599 10:15:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:49.858 [2024-06-10 10:15:55.241700] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:49.858 10:15:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.858 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.116 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.116 "name": "Existed_Raid", 00:11:50.116 "uuid": "68be5b2e-2712-11ef-b084-113036b5c18d", 00:11:50.116 "strip_size_kb": 0, 00:11:50.116 "state": "configuring", 00:11:50.116 "raid_level": "raid1", 00:11:50.116 "superblock": true, 00:11:50.116 "num_base_bdevs": 3, 00:11:50.116 "num_base_bdevs_discovered": 1, 00:11:50.116 "num_base_bdevs_operational": 3, 00:11:50.116 "base_bdevs_list": [ 00:11:50.116 { 00:11:50.116 "name": null, 00:11:50.116 "uuid": "6a01da7a-2712-11ef-b084-113036b5c18d", 00:11:50.116 "is_configured": false, 00:11:50.116 "data_offset": 2048, 00:11:50.116 "data_size": 63488 00:11:50.116 }, 00:11:50.116 { 00:11:50.116 "name": null, 00:11:50.116 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:50.116 "is_configured": false, 00:11:50.116 "data_offset": 2048, 00:11:50.116 "data_size": 63488 00:11:50.116 }, 00:11:50.116 { 00:11:50.116 "name": "BaseBdev3", 00:11:50.116 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:50.116 "is_configured": true, 00:11:50.116 "data_offset": 2048, 00:11:50.116 "data_size": 63488 00:11:50.116 } 00:11:50.116 ] 00:11:50.116 }' 00:11:50.116 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.116 10:15:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.375 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.375 10:15:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.633 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:50.633 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:50.892 [2024-06-10 10:15:56.434455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
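Each remove/add cycle above (bdev_raid_remove_base_bdev followed by bdev_raid_add_base_bdev) is checked with verify_raid_bdev_state, which reads bdev_raid_get_bdevs all through jq and compares the fields against the expected values. A simplified re-check of the state expected right here (configuring, raid1, 3 operational, 2 discovered) could look like the sketch below; this is an illustrative stand-in, not the test's actual helper:

  RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r .state      <<<"$info") == configuring ]]
  [[ $(jq -r .raid_level <<<"$info") == raid1 ]]
  [[ $(jq -r .num_base_bdevs_operational <<<"$info") -eq 3 ]]
  [[ $(jq -r .num_base_bdevs_discovered  <<<"$info") -eq 2 ]]
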
00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.892 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.458 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:51.458 "name": "Existed_Raid", 00:11:51.458 "uuid": "68be5b2e-2712-11ef-b084-113036b5c18d", 00:11:51.458 "strip_size_kb": 0, 00:11:51.458 "state": "configuring", 00:11:51.458 "raid_level": "raid1", 00:11:51.458 "superblock": true, 00:11:51.459 "num_base_bdevs": 3, 00:11:51.459 "num_base_bdevs_discovered": 2, 00:11:51.459 "num_base_bdevs_operational": 3, 00:11:51.459 "base_bdevs_list": [ 00:11:51.459 { 00:11:51.459 "name": null, 00:11:51.459 "uuid": "6a01da7a-2712-11ef-b084-113036b5c18d", 00:11:51.459 "is_configured": false, 00:11:51.459 "data_offset": 2048, 00:11:51.459 "data_size": 63488 00:11:51.459 }, 00:11:51.459 { 00:11:51.459 "name": "BaseBdev2", 00:11:51.459 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:51.459 "is_configured": true, 00:11:51.459 "data_offset": 2048, 00:11:51.459 "data_size": 63488 00:11:51.459 }, 00:11:51.459 { 00:11:51.459 "name": "BaseBdev3", 00:11:51.459 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:51.459 "is_configured": true, 00:11:51.459 "data_offset": 2048, 00:11:51.459 "data_size": 63488 00:11:51.459 } 00:11:51.459 ] 00:11:51.459 }' 00:11:51.459 10:15:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:51.459 10:15:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.717 10:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.717 10:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:51.976 10:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:51.976 10:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.976 10:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:52.234 
10:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6a01da7a-2712-11ef-b084-113036b5c18d 00:11:52.492 [2024-06-10 10:15:57.890609] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:52.492 [2024-06-10 10:15:57.890655] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ac14f00 00:11:52.492 [2024-06-10 10:15:57.890660] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.492 [2024-06-10 10:15:57.890679] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac77e20 00:11:52.492 [2024-06-10 10:15:57.890714] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ac14f00 00:11:52.492 [2024-06-10 10:15:57.890717] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ac14f00 00:11:52.492 [2024-06-10 10:15:57.890733] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.492 NewBaseBdev 00:11:52.492 10:15:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:52.492 10:15:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:11:52.492 10:15:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:11:52.492 10:15:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:11:52.492 10:15:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:11:52.492 10:15:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:11:52.492 10:15:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:52.751 10:15:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:53.013 [ 00:11:53.013 { 00:11:53.013 "name": "NewBaseBdev", 00:11:53.013 "aliases": [ 00:11:53.013 "6a01da7a-2712-11ef-b084-113036b5c18d" 00:11:53.013 ], 00:11:53.013 "product_name": "Malloc disk", 00:11:53.013 "block_size": 512, 00:11:53.013 "num_blocks": 65536, 00:11:53.013 "uuid": "6a01da7a-2712-11ef-b084-113036b5c18d", 00:11:53.013 "assigned_rate_limits": { 00:11:53.013 "rw_ios_per_sec": 0, 00:11:53.013 "rw_mbytes_per_sec": 0, 00:11:53.013 "r_mbytes_per_sec": 0, 00:11:53.013 "w_mbytes_per_sec": 0 00:11:53.013 }, 00:11:53.013 "claimed": true, 00:11:53.013 "claim_type": "exclusive_write", 00:11:53.013 "zoned": false, 00:11:53.013 "supported_io_types": { 00:11:53.013 "read": true, 00:11:53.013 "write": true, 00:11:53.013 "unmap": true, 00:11:53.013 "write_zeroes": true, 00:11:53.013 "flush": true, 00:11:53.013 "reset": true, 00:11:53.013 "compare": false, 00:11:53.013 "compare_and_write": false, 00:11:53.013 "abort": true, 00:11:53.013 "nvme_admin": false, 00:11:53.013 "nvme_io": false 00:11:53.013 }, 00:11:53.013 "memory_domains": [ 00:11:53.013 { 00:11:53.013 "dma_device_id": "system", 00:11:53.013 "dma_device_type": 1 00:11:53.013 }, 00:11:53.013 { 00:11:53.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.013 "dma_device_type": 2 00:11:53.013 } 00:11:53.013 ], 
00:11:53.013 "driver_specific": {} 00:11:53.013 } 00:11:53.013 ] 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.013 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.272 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.272 "name": "Existed_Raid", 00:11:53.272 "uuid": "68be5b2e-2712-11ef-b084-113036b5c18d", 00:11:53.272 "strip_size_kb": 0, 00:11:53.272 "state": "online", 00:11:53.272 "raid_level": "raid1", 00:11:53.272 "superblock": true, 00:11:53.272 "num_base_bdevs": 3, 00:11:53.272 "num_base_bdevs_discovered": 3, 00:11:53.272 "num_base_bdevs_operational": 3, 00:11:53.272 "base_bdevs_list": [ 00:11:53.272 { 00:11:53.272 "name": "NewBaseBdev", 00:11:53.272 "uuid": "6a01da7a-2712-11ef-b084-113036b5c18d", 00:11:53.272 "is_configured": true, 00:11:53.272 "data_offset": 2048, 00:11:53.272 "data_size": 63488 00:11:53.272 }, 00:11:53.272 { 00:11:53.272 "name": "BaseBdev2", 00:11:53.272 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:53.272 "is_configured": true, 00:11:53.272 "data_offset": 2048, 00:11:53.272 "data_size": 63488 00:11:53.272 }, 00:11:53.272 { 00:11:53.272 "name": "BaseBdev3", 00:11:53.272 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:53.272 "is_configured": true, 00:11:53.272 "data_offset": 2048, 00:11:53.272 "data_size": 63488 00:11:53.272 } 00:11:53.272 ] 00:11:53.272 }' 00:11:53.272 10:15:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.272 10:15:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.530 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.530 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:53.530 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:53.530 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # 
local base_bdev_info 00:11:53.530 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:53.531 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:53.531 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:53.531 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:53.789 [2024-06-10 10:15:59.290599] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.789 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:53.789 "name": "Existed_Raid", 00:11:53.789 "aliases": [ 00:11:53.789 "68be5b2e-2712-11ef-b084-113036b5c18d" 00:11:53.789 ], 00:11:53.789 "product_name": "Raid Volume", 00:11:53.789 "block_size": 512, 00:11:53.789 "num_blocks": 63488, 00:11:53.789 "uuid": "68be5b2e-2712-11ef-b084-113036b5c18d", 00:11:53.789 "assigned_rate_limits": { 00:11:53.789 "rw_ios_per_sec": 0, 00:11:53.789 "rw_mbytes_per_sec": 0, 00:11:53.789 "r_mbytes_per_sec": 0, 00:11:53.789 "w_mbytes_per_sec": 0 00:11:53.789 }, 00:11:53.789 "claimed": false, 00:11:53.789 "zoned": false, 00:11:53.789 "supported_io_types": { 00:11:53.789 "read": true, 00:11:53.789 "write": true, 00:11:53.789 "unmap": false, 00:11:53.789 "write_zeroes": true, 00:11:53.789 "flush": false, 00:11:53.789 "reset": true, 00:11:53.789 "compare": false, 00:11:53.789 "compare_and_write": false, 00:11:53.789 "abort": false, 00:11:53.789 "nvme_admin": false, 00:11:53.789 "nvme_io": false 00:11:53.789 }, 00:11:53.789 "memory_domains": [ 00:11:53.789 { 00:11:53.789 "dma_device_id": "system", 00:11:53.789 "dma_device_type": 1 00:11:53.789 }, 00:11:53.789 { 00:11:53.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.789 "dma_device_type": 2 00:11:53.789 }, 00:11:53.789 { 00:11:53.789 "dma_device_id": "system", 00:11:53.789 "dma_device_type": 1 00:11:53.789 }, 00:11:53.789 { 00:11:53.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.789 "dma_device_type": 2 00:11:53.789 }, 00:11:53.789 { 00:11:53.789 "dma_device_id": "system", 00:11:53.789 "dma_device_type": 1 00:11:53.789 }, 00:11:53.789 { 00:11:53.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.789 "dma_device_type": 2 00:11:53.789 } 00:11:53.789 ], 00:11:53.789 "driver_specific": { 00:11:53.789 "raid": { 00:11:53.789 "uuid": "68be5b2e-2712-11ef-b084-113036b5c18d", 00:11:53.789 "strip_size_kb": 0, 00:11:53.789 "state": "online", 00:11:53.789 "raid_level": "raid1", 00:11:53.789 "superblock": true, 00:11:53.789 "num_base_bdevs": 3, 00:11:53.789 "num_base_bdevs_discovered": 3, 00:11:53.789 "num_base_bdevs_operational": 3, 00:11:53.789 "base_bdevs_list": [ 00:11:53.789 { 00:11:53.789 "name": "NewBaseBdev", 00:11:53.789 "uuid": "6a01da7a-2712-11ef-b084-113036b5c18d", 00:11:53.789 "is_configured": true, 00:11:53.789 "data_offset": 2048, 00:11:53.789 "data_size": 63488 00:11:53.789 }, 00:11:53.789 { 00:11:53.789 "name": "BaseBdev2", 00:11:53.789 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:53.789 "is_configured": true, 00:11:53.789 "data_offset": 2048, 00:11:53.789 "data_size": 63488 00:11:53.789 }, 00:11:53.789 { 00:11:53.789 "name": "BaseBdev3", 00:11:53.789 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:53.789 "is_configured": true, 00:11:53.789 "data_offset": 2048, 00:11:53.789 "data_size": 63488 00:11:53.789 } 00:11:53.789 ] 
00:11:53.789 } 00:11:53.789 } 00:11:53.789 }' 00:11:53.789 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.789 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:53.789 BaseBdev2 00:11:53.789 BaseBdev3' 00:11:53.789 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:53.789 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:53.789 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:54.048 "name": "NewBaseBdev", 00:11:54.048 "aliases": [ 00:11:54.048 "6a01da7a-2712-11ef-b084-113036b5c18d" 00:11:54.048 ], 00:11:54.048 "product_name": "Malloc disk", 00:11:54.048 "block_size": 512, 00:11:54.048 "num_blocks": 65536, 00:11:54.048 "uuid": "6a01da7a-2712-11ef-b084-113036b5c18d", 00:11:54.048 "assigned_rate_limits": { 00:11:54.048 "rw_ios_per_sec": 0, 00:11:54.048 "rw_mbytes_per_sec": 0, 00:11:54.048 "r_mbytes_per_sec": 0, 00:11:54.048 "w_mbytes_per_sec": 0 00:11:54.048 }, 00:11:54.048 "claimed": true, 00:11:54.048 "claim_type": "exclusive_write", 00:11:54.048 "zoned": false, 00:11:54.048 "supported_io_types": { 00:11:54.048 "read": true, 00:11:54.048 "write": true, 00:11:54.048 "unmap": true, 00:11:54.048 "write_zeroes": true, 00:11:54.048 "flush": true, 00:11:54.048 "reset": true, 00:11:54.048 "compare": false, 00:11:54.048 "compare_and_write": false, 00:11:54.048 "abort": true, 00:11:54.048 "nvme_admin": false, 00:11:54.048 "nvme_io": false 00:11:54.048 }, 00:11:54.048 "memory_domains": [ 00:11:54.048 { 00:11:54.048 "dma_device_id": "system", 00:11:54.048 "dma_device_type": 1 00:11:54.048 }, 00:11:54.048 { 00:11:54.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.048 "dma_device_type": 2 00:11:54.048 } 00:11:54.048 ], 00:11:54.048 "driver_specific": {} 00:11:54.048 }' 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:54.048 10:15:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:54.048 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:54.614 "name": "BaseBdev2", 00:11:54.614 "aliases": [ 00:11:54.614 "67d188da-2712-11ef-b084-113036b5c18d" 00:11:54.614 ], 00:11:54.614 "product_name": "Malloc disk", 00:11:54.614 "block_size": 512, 00:11:54.614 "num_blocks": 65536, 00:11:54.614 "uuid": "67d188da-2712-11ef-b084-113036b5c18d", 00:11:54.614 "assigned_rate_limits": { 00:11:54.614 "rw_ios_per_sec": 0, 00:11:54.614 "rw_mbytes_per_sec": 0, 00:11:54.614 "r_mbytes_per_sec": 0, 00:11:54.614 "w_mbytes_per_sec": 0 00:11:54.614 }, 00:11:54.614 "claimed": true, 00:11:54.614 "claim_type": "exclusive_write", 00:11:54.614 "zoned": false, 00:11:54.614 "supported_io_types": { 00:11:54.614 "read": true, 00:11:54.614 "write": true, 00:11:54.614 "unmap": true, 00:11:54.614 "write_zeroes": true, 00:11:54.614 "flush": true, 00:11:54.614 "reset": true, 00:11:54.614 "compare": false, 00:11:54.614 "compare_and_write": false, 00:11:54.614 "abort": true, 00:11:54.614 "nvme_admin": false, 00:11:54.614 "nvme_io": false 00:11:54.614 }, 00:11:54.614 "memory_domains": [ 00:11:54.614 { 00:11:54.614 "dma_device_id": "system", 00:11:54.614 "dma_device_type": 1 00:11:54.614 }, 00:11:54.614 { 00:11:54.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.614 "dma_device_type": 2 00:11:54.614 } 00:11:54.614 ], 00:11:54.614 "driver_specific": {} 00:11:54.614 }' 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:54.614 10:15:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:54.872 10:16:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:54.872 "name": "BaseBdev3", 00:11:54.872 "aliases": [ 00:11:54.872 "684afeeb-2712-11ef-b084-113036b5c18d" 00:11:54.872 ], 00:11:54.872 "product_name": "Malloc disk", 00:11:54.872 "block_size": 512, 00:11:54.872 "num_blocks": 65536, 00:11:54.872 "uuid": "684afeeb-2712-11ef-b084-113036b5c18d", 00:11:54.872 "assigned_rate_limits": { 00:11:54.872 "rw_ios_per_sec": 0, 00:11:54.872 "rw_mbytes_per_sec": 0, 00:11:54.872 "r_mbytes_per_sec": 0, 00:11:54.872 "w_mbytes_per_sec": 0 00:11:54.872 }, 00:11:54.873 "claimed": true, 00:11:54.873 "claim_type": "exclusive_write", 00:11:54.873 "zoned": false, 00:11:54.873 "supported_io_types": { 00:11:54.873 "read": true, 00:11:54.873 "write": true, 00:11:54.873 "unmap": true, 00:11:54.873 "write_zeroes": true, 00:11:54.873 "flush": true, 00:11:54.873 "reset": true, 00:11:54.873 "compare": false, 00:11:54.873 "compare_and_write": false, 00:11:54.873 "abort": true, 00:11:54.873 "nvme_admin": false, 00:11:54.873 "nvme_io": false 00:11:54.873 }, 00:11:54.873 "memory_domains": [ 00:11:54.873 { 00:11:54.873 "dma_device_id": "system", 00:11:54.873 "dma_device_type": 1 00:11:54.873 }, 00:11:54.873 { 00:11:54.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.873 "dma_device_type": 2 00:11:54.873 } 00:11:54.873 ], 00:11:54.873 "driver_specific": {} 00:11:54.873 }' 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:54.873 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:55.131 [2024-06-10 10:16:00.546616] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.131 [2024-06-10 10:16:00.546643] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.131 [2024-06-10 10:16:00.546663] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.131 [2024-06-10 10:16:00.546729] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.131 [2024-06-10 10:16:00.546733] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ac14f00 name Existed_Raid, state offline 00:11:55.131 10:16:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 57591 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 57591 ']' 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 57591 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps -c -o command 57591 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # tail -1 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:11:55.131 killing process with pid 57591 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 57591' 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 57591 00:11:55.131 [2024-06-10 10:16:00.576614] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.131 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 57591 00:11:55.131 [2024-06-10 10:16:00.591020] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.390 10:16:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:55.390 00:11:55.390 real 0m25.748s 00:11:55.390 user 0m47.302s 00:11:55.390 sys 0m3.434s 00:11:55.390 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:55.390 10:16:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.390 ************************************ 00:11:55.390 END TEST raid_state_function_test_sb 00:11:55.390 ************************************ 00:11:55.390 10:16:00 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:11:55.390 10:16:00 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:11:55.390 10:16:00 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:55.390 10:16:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.390 ************************************ 00:11:55.390 START TEST raid_superblock_test 00:11:55.390 ************************************ 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 3 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 
00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=58327 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 58327 /var/tmp/spdk-raid.sock 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 58327 ']' 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:55.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:55.390 10:16:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.390 [2024-06-10 10:16:00.816907] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:11:55.390 [2024-06-10 10:16:00.817111] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:55.956 EAL: TSC is not safe to use in SMP mode 00:11:55.956 EAL: TSC is not invariant 00:11:55.956 [2024-06-10 10:16:01.321196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.956 [2024-06-10 10:16:01.403974] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
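[editor's sketch, not part of the recorded run] The trace above and below drives SPDK's RAID bdev JSON-RPC surface through rpc.py against the bdev_svc app. As a minimal illustration of that same sequence, the following shell sketch reproduces the visible steps (start bdev_svc, create three malloc bdevs wrapped in passthru bdevs, assemble a raid1 volume with a superblock, inspect it, tear it down). The SPDK_DIR variable, the rpc helper function, the sleep stand-in for the test's waitforlisten helper, and the svc_pid bookkeeping are assumptions added for illustration; the RPC method names, flags, sizes, and socket path are taken from the log itself, and running it for real needs the usual SPDK environment setup (hugepages, privileges).

#!/usr/bin/env bash
# Illustrative sketch: replay the RPC sequence seen in raid_superblock_test.
# Assumes SPDK_DIR points at an SPDK checkout with bdev_svc and rpc.py built.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/usr/home/vagrant/spdk_repo/spdk}   # assumption: layout as in the log
SOCK=/var/tmp/spdk-raid.sock
rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }     # helper added for brevity

# Start the minimal bdev application with raid debug logging, as the test does.
"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &
svc_pid=$!
sleep 1   # crude stand-in for the test's waitforlisten helper

# Three 32 MiB, 512-byte-block malloc bdevs wrapped in passthru bdevs pt1..pt3.
for i in 1 2 3; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble a raid1 volume with an on-disk superblock (-s) and inspect its state.
rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'

# Tear everything down again.
rpc bdev_raid_delete raid_bdev1
for i in 1 2 3; do rpc bdev_passthru_delete "pt$i"; done
kill "$svc_pid"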
00:11:55.956 [2024-06-10 10:16:01.406194] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.956 [2024-06-10 10:16:01.406989] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.956 [2024-06-10 10:16:01.407011] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.521 10:16:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:56.786 malloc1 00:11:56.786 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:57.048 [2024-06-10 10:16:02.469709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:57.048 [2024-06-10 10:16:02.469775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.048 [2024-06-10 10:16:02.469790] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b037780 00:11:57.048 [2024-06-10 10:16:02.469800] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.048 [2024-06-10 10:16:02.470675] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.048 [2024-06-10 10:16:02.470715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:57.048 pt1 00:11:57.048 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:57.048 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:57.048 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:57.048 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:57.048 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:57.048 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.048 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.048 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.048 10:16:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:57.307 malloc2 00:11:57.307 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:57.566 [2024-06-10 10:16:02.981734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:57.566 [2024-06-10 10:16:02.981814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.566 [2024-06-10 10:16:02.981830] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b037c80 00:11:57.566 [2024-06-10 10:16:02.981842] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.566 [2024-06-10 10:16:02.982479] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.566 [2024-06-10 10:16:02.982523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:57.566 pt2 00:11:57.566 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:57.566 10:16:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:57.566 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:11:57.566 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:11:57.566 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:57.566 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:57.566 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:57.566 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:57.566 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:57.825 malloc3 00:11:57.825 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:58.083 [2024-06-10 10:16:03.493748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:58.083 [2024-06-10 10:16:03.493816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.083 [2024-06-10 10:16:03.493832] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b038180 00:11:58.083 [2024-06-10 10:16:03.493843] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.083 [2024-06-10 10:16:03.494467] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.083 [2024-06-10 10:16:03.494508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:58.083 pt3 00:11:58.083 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:58.083 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:58.083 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:58.342 [2024-06-10 10:16:03.777759] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:58.342 [2024-06-10 10:16:03.778272] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:58.342 [2024-06-10 10:16:03.778303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:58.342 [2024-06-10 10:16:03.778367] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b038400 00:11:58.342 [2024-06-10 10:16:03.778379] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.342 [2024-06-10 10:16:03.778428] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b09ae20 00:11:58.342 [2024-06-10 10:16:03.778506] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b038400 00:11:58.342 [2024-06-10 10:16:03.778517] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b038400 00:11:58.342 [2024-06-10 10:16:03.778552] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.342 10:16:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.600 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.600 "name": "raid_bdev1", 00:11:58.600 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:11:58.600 "strip_size_kb": 0, 00:11:58.600 "state": "online", 00:11:58.600 "raid_level": "raid1", 00:11:58.600 "superblock": true, 00:11:58.600 "num_base_bdevs": 3, 00:11:58.600 "num_base_bdevs_discovered": 3, 00:11:58.600 "num_base_bdevs_operational": 3, 00:11:58.600 "base_bdevs_list": [ 00:11:58.600 { 00:11:58.600 "name": "pt1", 00:11:58.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.600 "is_configured": true, 00:11:58.600 "data_offset": 2048, 00:11:58.600 "data_size": 63488 00:11:58.600 }, 00:11:58.600 { 00:11:58.600 "name": "pt2", 00:11:58.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.600 "is_configured": true, 00:11:58.600 "data_offset": 2048, 
00:11:58.600 "data_size": 63488 00:11:58.600 }, 00:11:58.600 { 00:11:58.600 "name": "pt3", 00:11:58.600 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.600 "is_configured": true, 00:11:58.600 "data_offset": 2048, 00:11:58.600 "data_size": 63488 00:11:58.600 } 00:11:58.600 ] 00:11:58.600 }' 00:11:58.600 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.600 10:16:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.858 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:58.858 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:58.858 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:58.858 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:58.858 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:58.858 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:58.858 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:58.858 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:59.116 [2024-06-10 10:16:04.605808] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.116 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:59.116 "name": "raid_bdev1", 00:11:59.116 "aliases": [ 00:11:59.116 "716f2050-2712-11ef-b084-113036b5c18d" 00:11:59.116 ], 00:11:59.116 "product_name": "Raid Volume", 00:11:59.116 "block_size": 512, 00:11:59.116 "num_blocks": 63488, 00:11:59.116 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:11:59.116 "assigned_rate_limits": { 00:11:59.116 "rw_ios_per_sec": 0, 00:11:59.116 "rw_mbytes_per_sec": 0, 00:11:59.116 "r_mbytes_per_sec": 0, 00:11:59.116 "w_mbytes_per_sec": 0 00:11:59.116 }, 00:11:59.116 "claimed": false, 00:11:59.116 "zoned": false, 00:11:59.116 "supported_io_types": { 00:11:59.116 "read": true, 00:11:59.116 "write": true, 00:11:59.116 "unmap": false, 00:11:59.116 "write_zeroes": true, 00:11:59.116 "flush": false, 00:11:59.116 "reset": true, 00:11:59.116 "compare": false, 00:11:59.116 "compare_and_write": false, 00:11:59.116 "abort": false, 00:11:59.116 "nvme_admin": false, 00:11:59.116 "nvme_io": false 00:11:59.116 }, 00:11:59.116 "memory_domains": [ 00:11:59.116 { 00:11:59.116 "dma_device_id": "system", 00:11:59.116 "dma_device_type": 1 00:11:59.116 }, 00:11:59.116 { 00:11:59.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.116 "dma_device_type": 2 00:11:59.116 }, 00:11:59.116 { 00:11:59.116 "dma_device_id": "system", 00:11:59.116 "dma_device_type": 1 00:11:59.116 }, 00:11:59.116 { 00:11:59.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.116 "dma_device_type": 2 00:11:59.116 }, 00:11:59.116 { 00:11:59.116 "dma_device_id": "system", 00:11:59.116 "dma_device_type": 1 00:11:59.116 }, 00:11:59.116 { 00:11:59.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.116 "dma_device_type": 2 00:11:59.116 } 00:11:59.116 ], 00:11:59.116 "driver_specific": { 00:11:59.116 "raid": { 00:11:59.116 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:11:59.116 "strip_size_kb": 0, 00:11:59.116 "state": "online", 00:11:59.116 "raid_level": "raid1", 00:11:59.116 
"superblock": true, 00:11:59.116 "num_base_bdevs": 3, 00:11:59.116 "num_base_bdevs_discovered": 3, 00:11:59.116 "num_base_bdevs_operational": 3, 00:11:59.116 "base_bdevs_list": [ 00:11:59.116 { 00:11:59.116 "name": "pt1", 00:11:59.116 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.116 "is_configured": true, 00:11:59.116 "data_offset": 2048, 00:11:59.116 "data_size": 63488 00:11:59.116 }, 00:11:59.116 { 00:11:59.116 "name": "pt2", 00:11:59.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.116 "is_configured": true, 00:11:59.116 "data_offset": 2048, 00:11:59.116 "data_size": 63488 00:11:59.116 }, 00:11:59.116 { 00:11:59.116 "name": "pt3", 00:11:59.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.116 "is_configured": true, 00:11:59.116 "data_offset": 2048, 00:11:59.116 "data_size": 63488 00:11:59.116 } 00:11:59.116 ] 00:11:59.116 } 00:11:59.116 } 00:11:59.117 }' 00:11:59.117 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.117 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:59.117 pt2 00:11:59.117 pt3' 00:11:59.117 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:59.117 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:59.117 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:59.374 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:59.374 "name": "pt1", 00:11:59.374 "aliases": [ 00:11:59.374 "00000000-0000-0000-0000-000000000001" 00:11:59.374 ], 00:11:59.374 "product_name": "passthru", 00:11:59.374 "block_size": 512, 00:11:59.374 "num_blocks": 65536, 00:11:59.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:59.374 "assigned_rate_limits": { 00:11:59.374 "rw_ios_per_sec": 0, 00:11:59.374 "rw_mbytes_per_sec": 0, 00:11:59.374 "r_mbytes_per_sec": 0, 00:11:59.374 "w_mbytes_per_sec": 0 00:11:59.374 }, 00:11:59.374 "claimed": true, 00:11:59.374 "claim_type": "exclusive_write", 00:11:59.374 "zoned": false, 00:11:59.374 "supported_io_types": { 00:11:59.374 "read": true, 00:11:59.374 "write": true, 00:11:59.374 "unmap": true, 00:11:59.374 "write_zeroes": true, 00:11:59.374 "flush": true, 00:11:59.374 "reset": true, 00:11:59.374 "compare": false, 00:11:59.374 "compare_and_write": false, 00:11:59.374 "abort": true, 00:11:59.374 "nvme_admin": false, 00:11:59.374 "nvme_io": false 00:11:59.374 }, 00:11:59.374 "memory_domains": [ 00:11:59.374 { 00:11:59.374 "dma_device_id": "system", 00:11:59.374 "dma_device_type": 1 00:11:59.374 }, 00:11:59.374 { 00:11:59.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.374 "dma_device_type": 2 00:11:59.374 } 00:11:59.374 ], 00:11:59.374 "driver_specific": { 00:11:59.374 "passthru": { 00:11:59.374 "name": "pt1", 00:11:59.374 "base_bdev_name": "malloc1" 00:11:59.374 } 00:11:59.374 } 00:11:59.374 }' 00:11:59.374 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:59.374 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:59.374 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:59.374 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:59.374 10:16:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:59.374 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:59.374 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:59.374 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:59.374 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:59.374 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:59.632 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:59.632 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:59.632 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:59.632 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:59.632 10:16:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:59.892 "name": "pt2", 00:11:59.892 "aliases": [ 00:11:59.892 "00000000-0000-0000-0000-000000000002" 00:11:59.892 ], 00:11:59.892 "product_name": "passthru", 00:11:59.892 "block_size": 512, 00:11:59.892 "num_blocks": 65536, 00:11:59.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.892 "assigned_rate_limits": { 00:11:59.892 "rw_ios_per_sec": 0, 00:11:59.892 "rw_mbytes_per_sec": 0, 00:11:59.892 "r_mbytes_per_sec": 0, 00:11:59.892 "w_mbytes_per_sec": 0 00:11:59.892 }, 00:11:59.892 "claimed": true, 00:11:59.892 "claim_type": "exclusive_write", 00:11:59.892 "zoned": false, 00:11:59.892 "supported_io_types": { 00:11:59.892 "read": true, 00:11:59.892 "write": true, 00:11:59.892 "unmap": true, 00:11:59.892 "write_zeroes": true, 00:11:59.892 "flush": true, 00:11:59.892 "reset": true, 00:11:59.892 "compare": false, 00:11:59.892 "compare_and_write": false, 00:11:59.892 "abort": true, 00:11:59.892 "nvme_admin": false, 00:11:59.892 "nvme_io": false 00:11:59.892 }, 00:11:59.892 "memory_domains": [ 00:11:59.892 { 00:11:59.892 "dma_device_id": "system", 00:11:59.892 "dma_device_type": 1 00:11:59.892 }, 00:11:59.892 { 00:11:59.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.892 "dma_device_type": 2 00:11:59.892 } 00:11:59.892 ], 00:11:59.892 "driver_specific": { 00:11:59.892 "passthru": { 00:11:59.892 "name": "pt2", 00:11:59.892 "base_bdev_name": "malloc2" 00:11:59.892 } 00:11:59.892 } 00:11:59.892 }' 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:59.892 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:00.194 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:00.194 "name": "pt3", 00:12:00.194 "aliases": [ 00:12:00.194 "00000000-0000-0000-0000-000000000003" 00:12:00.194 ], 00:12:00.194 "product_name": "passthru", 00:12:00.194 "block_size": 512, 00:12:00.194 "num_blocks": 65536, 00:12:00.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.194 "assigned_rate_limits": { 00:12:00.194 "rw_ios_per_sec": 0, 00:12:00.194 "rw_mbytes_per_sec": 0, 00:12:00.194 "r_mbytes_per_sec": 0, 00:12:00.194 "w_mbytes_per_sec": 0 00:12:00.194 }, 00:12:00.194 "claimed": true, 00:12:00.194 "claim_type": "exclusive_write", 00:12:00.194 "zoned": false, 00:12:00.194 "supported_io_types": { 00:12:00.194 "read": true, 00:12:00.194 "write": true, 00:12:00.194 "unmap": true, 00:12:00.194 "write_zeroes": true, 00:12:00.194 "flush": true, 00:12:00.194 "reset": true, 00:12:00.194 "compare": false, 00:12:00.194 "compare_and_write": false, 00:12:00.194 "abort": true, 00:12:00.194 "nvme_admin": false, 00:12:00.194 "nvme_io": false 00:12:00.194 }, 00:12:00.194 "memory_domains": [ 00:12:00.194 { 00:12:00.194 "dma_device_id": "system", 00:12:00.194 "dma_device_type": 1 00:12:00.194 }, 00:12:00.194 { 00:12:00.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.194 "dma_device_type": 2 00:12:00.194 } 00:12:00.194 ], 00:12:00.194 "driver_specific": { 00:12:00.194 "passthru": { 00:12:00.194 "name": "pt3", 00:12:00.195 "base_bdev_name": "malloc3" 00:12:00.195 } 00:12:00.195 } 00:12:00.195 }' 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:00.195 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:00.508 [2024-06-10 10:16:05.833823] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.508 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=716f2050-2712-11ef-b084-113036b5c18d 00:12:00.508 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 716f2050-2712-11ef-b084-113036b5c18d ']' 00:12:00.508 10:16:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:00.508 [2024-06-10 10:16:06.045810] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.508 [2024-06-10 10:16:06.045839] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.508 [2024-06-10 10:16:06.045859] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.508 [2024-06-10 10:16:06.045875] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.508 [2024-06-10 10:16:06.045879] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b038400 name raid_bdev1, state offline 00:12:00.508 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:00.508 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.766 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:00.766 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:12:00.766 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:00.766 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:01.025 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:01.025 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:01.283 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:01.283 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:01.541 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:01.541 10:16:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # 
local es=0 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:01.801 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:02.060 [2024-06-10 10:16:07.541865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:02.060 [2024-06-10 10:16:07.542424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:02.060 [2024-06-10 10:16:07.542444] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:02.060 [2024-06-10 10:16:07.542456] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:02.060 [2024-06-10 10:16:07.542512] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:02.060 [2024-06-10 10:16:07.542523] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:02.060 [2024-06-10 10:16:07.542531] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.060 [2024-06-10 10:16:07.542536] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b038180 name raid_bdev1, state configuring 00:12:02.060 request: 00:12:02.060 { 00:12:02.060 "name": "raid_bdev1", 00:12:02.060 "raid_level": "raid1", 00:12:02.060 "base_bdevs": [ 00:12:02.060 "malloc1", 00:12:02.060 "malloc2", 00:12:02.060 "malloc3" 00:12:02.060 ], 00:12:02.060 "superblock": false, 00:12:02.060 "method": "bdev_raid_create", 00:12:02.060 "req_id": 1 00:12:02.060 } 00:12:02.060 Got JSON-RPC error response 00:12:02.060 response: 00:12:02.060 { 00:12:02.060 "code": -17, 00:12:02.060 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:02.060 } 00:12:02.060 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:12:02.060 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:02.060 10:16:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:02.060 10:16:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:02.060 10:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.060 10:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:02.318 10:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:02.318 10:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:02.318 10:16:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:02.576 [2024-06-10 10:16:08.057870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:02.576 [2024-06-10 10:16:08.057922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.576 [2024-06-10 10:16:08.057933] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b037c80 00:12:02.576 [2024-06-10 10:16:08.057942] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.577 [2024-06-10 10:16:08.058428] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.577 [2024-06-10 10:16:08.058459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:02.577 [2024-06-10 10:16:08.058479] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:02.577 [2024-06-10 10:16:08.058489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:02.577 pt1 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.577 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.836 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.836 "name": "raid_bdev1", 00:12:02.836 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:12:02.836 "strip_size_kb": 0, 00:12:02.836 "state": "configuring", 00:12:02.836 
"raid_level": "raid1", 00:12:02.836 "superblock": true, 00:12:02.836 "num_base_bdevs": 3, 00:12:02.836 "num_base_bdevs_discovered": 1, 00:12:02.836 "num_base_bdevs_operational": 3, 00:12:02.836 "base_bdevs_list": [ 00:12:02.836 { 00:12:02.836 "name": "pt1", 00:12:02.836 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:02.836 "is_configured": true, 00:12:02.836 "data_offset": 2048, 00:12:02.836 "data_size": 63488 00:12:02.836 }, 00:12:02.836 { 00:12:02.836 "name": null, 00:12:02.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:02.836 "is_configured": false, 00:12:02.836 "data_offset": 2048, 00:12:02.836 "data_size": 63488 00:12:02.836 }, 00:12:02.836 { 00:12:02.836 "name": null, 00:12:02.836 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:02.836 "is_configured": false, 00:12:02.836 "data_offset": 2048, 00:12:02.836 "data_size": 63488 00:12:02.836 } 00:12:02.836 ] 00:12:02.836 }' 00:12:02.836 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.836 10:16:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.094 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:03.094 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:03.353 [2024-06-10 10:16:08.953950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:03.353 [2024-06-10 10:16:08.954027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.353 [2024-06-10 10:16:08.954064] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b038680 00:12:03.353 [2024-06-10 10:16:08.954080] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.353 [2024-06-10 10:16:08.954199] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.353 [2024-06-10 10:16:08.954222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:03.353 [2024-06-10 10:16:08.954253] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:03.353 [2024-06-10 10:16:08.954267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:03.611 pt2 00:12:03.611 10:16:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:03.611 [2024-06-10 10:16:09.201973] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:03.870 10:16:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.870 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.129 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:04.129 "name": "raid_bdev1", 00:12:04.129 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:12:04.129 "strip_size_kb": 0, 00:12:04.129 "state": "configuring", 00:12:04.129 "raid_level": "raid1", 00:12:04.129 "superblock": true, 00:12:04.129 "num_base_bdevs": 3, 00:12:04.129 "num_base_bdevs_discovered": 1, 00:12:04.129 "num_base_bdevs_operational": 3, 00:12:04.129 "base_bdevs_list": [ 00:12:04.129 { 00:12:04.129 "name": "pt1", 00:12:04.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:04.129 "is_configured": true, 00:12:04.129 "data_offset": 2048, 00:12:04.129 "data_size": 63488 00:12:04.129 }, 00:12:04.129 { 00:12:04.129 "name": null, 00:12:04.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.129 "is_configured": false, 00:12:04.129 "data_offset": 2048, 00:12:04.129 "data_size": 63488 00:12:04.129 }, 00:12:04.129 { 00:12:04.129 "name": null, 00:12:04.129 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.129 "is_configured": false, 00:12:04.129 "data_offset": 2048, 00:12:04.129 "data_size": 63488 00:12:04.129 } 00:12:04.129 ] 00:12:04.129 }' 00:12:04.129 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:04.129 10:16:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.416 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:04.416 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:04.416 10:16:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:04.675 [2024-06-10 10:16:10.161985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:04.675 [2024-06-10 10:16:10.162045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.675 [2024-06-10 10:16:10.162057] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b038680 00:12:04.675 [2024-06-10 10:16:10.162066] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.675 [2024-06-10 10:16:10.162165] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.675 [2024-06-10 10:16:10.162174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:04.675 [2024-06-10 10:16:10.162195] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:04.675 [2024-06-10 10:16:10.162202] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:04.675 pt2 00:12:04.675 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:04.675 10:16:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:04.675 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:04.934 [2024-06-10 10:16:10.445977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:04.934 [2024-06-10 10:16:10.446025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.934 [2024-06-10 10:16:10.446034] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b038400 00:12:04.934 [2024-06-10 10:16:10.446042] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.934 [2024-06-10 10:16:10.446108] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.934 [2024-06-10 10:16:10.446117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:04.934 [2024-06-10 10:16:10.446132] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:04.934 [2024-06-10 10:16:10.446138] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:04.934 [2024-06-10 10:16:10.446160] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b037780 00:12:04.934 [2024-06-10 10:16:10.446164] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.934 [2024-06-10 10:16:10.446182] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b09ae20 00:12:04.934 [2024-06-10 10:16:10.446224] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b037780 00:12:04.934 [2024-06-10 10:16:10.446228] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b037780 00:12:04.934 [2024-06-10 10:16:10.446245] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.934 pt3 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:04.934 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.934 10:16:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.193 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:05.193 "name": "raid_bdev1", 00:12:05.193 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:12:05.193 "strip_size_kb": 0, 00:12:05.193 "state": "online", 00:12:05.193 "raid_level": "raid1", 00:12:05.193 "superblock": true, 00:12:05.193 "num_base_bdevs": 3, 00:12:05.193 "num_base_bdevs_discovered": 3, 00:12:05.193 "num_base_bdevs_operational": 3, 00:12:05.193 "base_bdevs_list": [ 00:12:05.193 { 00:12:05.193 "name": "pt1", 00:12:05.193 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.193 "is_configured": true, 00:12:05.193 "data_offset": 2048, 00:12:05.193 "data_size": 63488 00:12:05.193 }, 00:12:05.193 { 00:12:05.193 "name": "pt2", 00:12:05.193 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.193 "is_configured": true, 00:12:05.193 "data_offset": 2048, 00:12:05.193 "data_size": 63488 00:12:05.193 }, 00:12:05.193 { 00:12:05.193 "name": "pt3", 00:12:05.193 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.193 "is_configured": true, 00:12:05.193 "data_offset": 2048, 00:12:05.193 "data_size": 63488 00:12:05.193 } 00:12:05.193 ] 00:12:05.193 }' 00:12:05.193 10:16:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:05.193 10:16:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.760 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:05.760 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:05.760 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:05.760 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:05.761 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:05.761 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:05.761 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:05.761 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:05.761 [2024-06-10 10:16:11.342048] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.761 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:05.761 "name": "raid_bdev1", 00:12:05.761 "aliases": [ 00:12:05.761 "716f2050-2712-11ef-b084-113036b5c18d" 00:12:05.761 ], 00:12:05.761 "product_name": "Raid Volume", 00:12:05.761 "block_size": 512, 00:12:05.761 "num_blocks": 63488, 00:12:05.761 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:12:05.761 "assigned_rate_limits": { 00:12:05.761 "rw_ios_per_sec": 0, 00:12:05.761 "rw_mbytes_per_sec": 0, 00:12:05.761 "r_mbytes_per_sec": 0, 00:12:05.761 "w_mbytes_per_sec": 0 00:12:05.761 }, 00:12:05.761 "claimed": false, 00:12:05.761 "zoned": false, 00:12:05.761 "supported_io_types": { 00:12:05.761 "read": true, 00:12:05.761 "write": true, 00:12:05.761 "unmap": false, 00:12:05.761 "write_zeroes": true, 00:12:05.761 "flush": false, 00:12:05.761 "reset": true, 00:12:05.761 "compare": false, 00:12:05.761 "compare_and_write": false, 00:12:05.761 "abort": false, 00:12:05.761 "nvme_admin": false, 00:12:05.761 
"nvme_io": false 00:12:05.761 }, 00:12:05.761 "memory_domains": [ 00:12:05.761 { 00:12:05.761 "dma_device_id": "system", 00:12:05.761 "dma_device_type": 1 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.761 "dma_device_type": 2 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "system", 00:12:05.761 "dma_device_type": 1 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.761 "dma_device_type": 2 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "system", 00:12:05.761 "dma_device_type": 1 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.761 "dma_device_type": 2 00:12:05.761 } 00:12:05.761 ], 00:12:05.761 "driver_specific": { 00:12:05.761 "raid": { 00:12:05.761 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:12:05.761 "strip_size_kb": 0, 00:12:05.761 "state": "online", 00:12:05.761 "raid_level": "raid1", 00:12:05.761 "superblock": true, 00:12:05.761 "num_base_bdevs": 3, 00:12:05.761 "num_base_bdevs_discovered": 3, 00:12:05.761 "num_base_bdevs_operational": 3, 00:12:05.761 "base_bdevs_list": [ 00:12:05.761 { 00:12:05.761 "name": "pt1", 00:12:05.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.761 "is_configured": true, 00:12:05.761 "data_offset": 2048, 00:12:05.761 "data_size": 63488 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "name": "pt2", 00:12:05.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.761 "is_configured": true, 00:12:05.761 "data_offset": 2048, 00:12:05.761 "data_size": 63488 00:12:05.761 }, 00:12:05.761 { 00:12:05.761 "name": "pt3", 00:12:05.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.761 "is_configured": true, 00:12:05.761 "data_offset": 2048, 00:12:05.761 "data_size": 63488 00:12:05.761 } 00:12:05.761 ] 00:12:05.761 } 00:12:05.761 } 00:12:05.761 }' 00:12:05.761 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.019 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:06.019 pt2 00:12:06.019 pt3' 00:12:06.019 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.019 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:06.019 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.277 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.277 "name": "pt1", 00:12:06.277 "aliases": [ 00:12:06.277 "00000000-0000-0000-0000-000000000001" 00:12:06.277 ], 00:12:06.277 "product_name": "passthru", 00:12:06.277 "block_size": 512, 00:12:06.278 "num_blocks": 65536, 00:12:06.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.278 "assigned_rate_limits": { 00:12:06.278 "rw_ios_per_sec": 0, 00:12:06.278 "rw_mbytes_per_sec": 0, 00:12:06.278 "r_mbytes_per_sec": 0, 00:12:06.278 "w_mbytes_per_sec": 0 00:12:06.278 }, 00:12:06.278 "claimed": true, 00:12:06.278 "claim_type": "exclusive_write", 00:12:06.278 "zoned": false, 00:12:06.278 "supported_io_types": { 00:12:06.278 "read": true, 00:12:06.278 "write": true, 00:12:06.278 "unmap": true, 00:12:06.278 "write_zeroes": true, 00:12:06.278 "flush": true, 00:12:06.278 "reset": true, 00:12:06.278 "compare": false, 00:12:06.278 "compare_and_write": 
false, 00:12:06.278 "abort": true, 00:12:06.278 "nvme_admin": false, 00:12:06.278 "nvme_io": false 00:12:06.278 }, 00:12:06.278 "memory_domains": [ 00:12:06.278 { 00:12:06.278 "dma_device_id": "system", 00:12:06.278 "dma_device_type": 1 00:12:06.278 }, 00:12:06.278 { 00:12:06.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.278 "dma_device_type": 2 00:12:06.278 } 00:12:06.278 ], 00:12:06.278 "driver_specific": { 00:12:06.278 "passthru": { 00:12:06.278 "name": "pt1", 00:12:06.278 "base_bdev_name": "malloc1" 00:12:06.278 } 00:12:06.278 } 00:12:06.278 }' 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:06.278 10:16:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.537 "name": "pt2", 00:12:06.537 "aliases": [ 00:12:06.537 "00000000-0000-0000-0000-000000000002" 00:12:06.537 ], 00:12:06.537 "product_name": "passthru", 00:12:06.537 "block_size": 512, 00:12:06.537 "num_blocks": 65536, 00:12:06.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.537 "assigned_rate_limits": { 00:12:06.537 "rw_ios_per_sec": 0, 00:12:06.537 "rw_mbytes_per_sec": 0, 00:12:06.537 "r_mbytes_per_sec": 0, 00:12:06.537 "w_mbytes_per_sec": 0 00:12:06.537 }, 00:12:06.537 "claimed": true, 00:12:06.537 "claim_type": "exclusive_write", 00:12:06.537 "zoned": false, 00:12:06.537 "supported_io_types": { 00:12:06.537 "read": true, 00:12:06.537 "write": true, 00:12:06.537 "unmap": true, 00:12:06.537 "write_zeroes": true, 00:12:06.537 "flush": true, 00:12:06.537 "reset": true, 00:12:06.537 "compare": false, 00:12:06.537 "compare_and_write": false, 00:12:06.537 "abort": true, 00:12:06.537 "nvme_admin": false, 00:12:06.537 "nvme_io": false 00:12:06.537 }, 00:12:06.537 "memory_domains": [ 00:12:06.537 { 00:12:06.537 "dma_device_id": "system", 00:12:06.537 "dma_device_type": 1 00:12:06.537 }, 00:12:06.537 { 00:12:06.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.537 "dma_device_type": 2 00:12:06.537 } 00:12:06.537 ], 00:12:06.537 "driver_specific": { 00:12:06.537 "passthru": { 
00:12:06.537 "name": "pt2", 00:12:06.537 "base_bdev_name": "malloc2" 00:12:06.537 } 00:12:06.537 } 00:12:06.537 }' 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:06.537 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:06.795 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:06.795 "name": "pt3", 00:12:06.795 "aliases": [ 00:12:06.795 "00000000-0000-0000-0000-000000000003" 00:12:06.795 ], 00:12:06.795 "product_name": "passthru", 00:12:06.795 "block_size": 512, 00:12:06.795 "num_blocks": 65536, 00:12:06.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.795 "assigned_rate_limits": { 00:12:06.795 "rw_ios_per_sec": 0, 00:12:06.795 "rw_mbytes_per_sec": 0, 00:12:06.795 "r_mbytes_per_sec": 0, 00:12:06.795 "w_mbytes_per_sec": 0 00:12:06.795 }, 00:12:06.795 "claimed": true, 00:12:06.795 "claim_type": "exclusive_write", 00:12:06.795 "zoned": false, 00:12:06.795 "supported_io_types": { 00:12:06.795 "read": true, 00:12:06.795 "write": true, 00:12:06.795 "unmap": true, 00:12:06.795 "write_zeroes": true, 00:12:06.795 "flush": true, 00:12:06.795 "reset": true, 00:12:06.795 "compare": false, 00:12:06.795 "compare_and_write": false, 00:12:06.795 "abort": true, 00:12:06.795 "nvme_admin": false, 00:12:06.795 "nvme_io": false 00:12:06.795 }, 00:12:06.795 "memory_domains": [ 00:12:06.795 { 00:12:06.795 "dma_device_id": "system", 00:12:06.795 "dma_device_type": 1 00:12:06.795 }, 00:12:06.795 { 00:12:06.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.795 "dma_device_type": 2 00:12:06.795 } 00:12:06.795 ], 00:12:06.795 "driver_specific": { 00:12:06.795 "passthru": { 00:12:06.795 "name": "pt3", 00:12:06.795 "base_bdev_name": "malloc3" 00:12:06.795 } 00:12:06.795 } 00:12:06.795 }' 00:12:06.795 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.795 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:06.795 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:06.795 10:16:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.796 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:06.796 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:07.054 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.054 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:07.054 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:07.054 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.054 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:07.054 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:07.054 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:07.054 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:07.312 [2024-06-10 10:16:12.718056] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.312 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 716f2050-2712-11ef-b084-113036b5c18d '!=' 716f2050-2712-11ef-b084-113036b5c18d ']' 00:12:07.312 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:12:07.312 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:07.312 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:07.312 10:16:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:07.570 [2024-06-10 10:16:13.014037] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.570 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.828 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:07.828 
"name": "raid_bdev1", 00:12:07.828 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:12:07.828 "strip_size_kb": 0, 00:12:07.828 "state": "online", 00:12:07.828 "raid_level": "raid1", 00:12:07.828 "superblock": true, 00:12:07.828 "num_base_bdevs": 3, 00:12:07.828 "num_base_bdevs_discovered": 2, 00:12:07.828 "num_base_bdevs_operational": 2, 00:12:07.828 "base_bdevs_list": [ 00:12:07.828 { 00:12:07.828 "name": null, 00:12:07.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.828 "is_configured": false, 00:12:07.828 "data_offset": 2048, 00:12:07.828 "data_size": 63488 00:12:07.828 }, 00:12:07.828 { 00:12:07.828 "name": "pt2", 00:12:07.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.828 "is_configured": true, 00:12:07.828 "data_offset": 2048, 00:12:07.828 "data_size": 63488 00:12:07.828 }, 00:12:07.828 { 00:12:07.828 "name": "pt3", 00:12:07.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.828 "is_configured": true, 00:12:07.828 "data_offset": 2048, 00:12:07.828 "data_size": 63488 00:12:07.828 } 00:12:07.828 ] 00:12:07.828 }' 00:12:07.828 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:07.828 10:16:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:08.655 [2024-06-10 10:16:13.974042] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.655 [2024-06-10 10:16:13.974074] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.655 [2024-06-10 10:16:13.974096] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.655 [2024-06-10 10:16:13.974111] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.655 [2024-06-10 10:16:13.974116] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b037780 name raid_bdev1, state offline 00:12:08.655 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:12:08.655 10:16:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.919 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:12:08.919 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:12:08.919 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:12:08.919 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:08.919 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:09.178 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:09.178 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:09.178 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:09.437 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:09.437 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < 
num_base_bdevs )) 00:12:09.437 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:12:09.437 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:09.437 10:16:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:09.695 [2024-06-10 10:16:15.130060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:09.695 [2024-06-10 10:16:15.130119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.695 [2024-06-10 10:16:15.130132] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b038400 00:12:09.695 [2024-06-10 10:16:15.130140] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.695 [2024-06-10 10:16:15.130660] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.695 [2024-06-10 10:16:15.130689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:09.695 [2024-06-10 10:16:15.130714] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:09.695 [2024-06-10 10:16:15.130725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:09.695 pt2 00:12:09.695 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:09.695 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:09.696 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:09.696 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:09.696 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:09.696 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:09.696 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:09.696 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:09.696 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:09.696 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:09.696 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.696 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.954 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:09.954 "name": "raid_bdev1", 00:12:09.954 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:12:09.954 "strip_size_kb": 0, 00:12:09.954 "state": "configuring", 00:12:09.954 "raid_level": "raid1", 00:12:09.954 "superblock": true, 00:12:09.954 "num_base_bdevs": 3, 00:12:09.954 "num_base_bdevs_discovered": 1, 00:12:09.954 "num_base_bdevs_operational": 2, 00:12:09.954 "base_bdevs_list": [ 00:12:09.954 { 00:12:09.954 "name": null, 00:12:09.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.954 "is_configured": false, 00:12:09.954 "data_offset": 2048, 00:12:09.954 
"data_size": 63488 00:12:09.954 }, 00:12:09.954 { 00:12:09.954 "name": "pt2", 00:12:09.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.954 "is_configured": true, 00:12:09.954 "data_offset": 2048, 00:12:09.954 "data_size": 63488 00:12:09.954 }, 00:12:09.954 { 00:12:09.954 "name": null, 00:12:09.954 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.954 "is_configured": false, 00:12:09.954 "data_offset": 2048, 00:12:09.954 "data_size": 63488 00:12:09.954 } 00:12:09.954 ] 00:12:09.954 }' 00:12:09.954 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:09.954 10:16:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.212 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:12:10.212 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:10.212 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:12:10.212 10:16:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:10.471 [2024-06-10 10:16:16.030078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:10.471 [2024-06-10 10:16:16.030148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.471 [2024-06-10 10:16:16.030165] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b037780 00:12:10.471 [2024-06-10 10:16:16.030176] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.471 [2024-06-10 10:16:16.030294] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.471 [2024-06-10 10:16:16.030307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:10.471 [2024-06-10 10:16:16.030338] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:10.471 [2024-06-10 10:16:16.030355] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:10.471 [2024-06-10 10:16:16.030402] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b038180 00:12:10.471 [2024-06-10 10:16:16.030411] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.471 [2024-06-10 10:16:16.030449] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b09ae20 00:12:10.471 [2024-06-10 10:16:16.030516] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b038180 00:12:10.471 [2024-06-10 10:16:16.030524] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b038180 00:12:10.471 [2024-06-10 10:16:16.030559] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.471 pt3 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.471 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.729 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.729 "name": "raid_bdev1", 00:12:10.729 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:12:10.729 "strip_size_kb": 0, 00:12:10.729 "state": "online", 00:12:10.729 "raid_level": "raid1", 00:12:10.729 "superblock": true, 00:12:10.729 "num_base_bdevs": 3, 00:12:10.729 "num_base_bdevs_discovered": 2, 00:12:10.729 "num_base_bdevs_operational": 2, 00:12:10.729 "base_bdevs_list": [ 00:12:10.729 { 00:12:10.729 "name": null, 00:12:10.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.729 "is_configured": false, 00:12:10.729 "data_offset": 2048, 00:12:10.729 "data_size": 63488 00:12:10.729 }, 00:12:10.729 { 00:12:10.729 "name": "pt2", 00:12:10.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.729 "is_configured": true, 00:12:10.729 "data_offset": 2048, 00:12:10.729 "data_size": 63488 00:12:10.729 }, 00:12:10.729 { 00:12:10.729 "name": "pt3", 00:12:10.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.729 "is_configured": true, 00:12:10.729 "data_offset": 2048, 00:12:10.729 "data_size": 63488 00:12:10.729 } 00:12:10.729 ] 00:12:10.729 }' 00:12:10.729 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.729 10:16:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.296 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:11.554 [2024-06-10 10:16:16.926067] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.554 [2024-06-10 10:16:16.926096] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.554 [2024-06-10 10:16:16.926117] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.554 [2024-06-10 10:16:16.926131] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.554 [2024-06-10 10:16:16.926136] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b038180 name raid_bdev1, state offline 00:12:11.554 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.554 10:16:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:12:11.813 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:12:11.813 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 
00:12:11.813 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:12:11.813 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:12:11.813 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:11.813 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:12.071 [2024-06-10 10:16:17.594096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:12.071 [2024-06-10 10:16:17.594172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.071 [2024-06-10 10:16:17.594185] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b037780 00:12:12.071 [2024-06-10 10:16:17.594193] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.071 [2024-06-10 10:16:17.594779] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.071 [2024-06-10 10:16:17.594818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:12.071 [2024-06-10 10:16:17.594842] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:12.071 [2024-06-10 10:16:17.594854] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:12.071 [2024-06-10 10:16:17.594880] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:12.071 [2024-06-10 10:16:17.594884] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.071 [2024-06-10 10:16:17.594889] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b038180 name raid_bdev1, state configuring 00:12:12.071 [2024-06-10 10:16:17.594896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:12.071 pt1 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:12:12.071 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.330 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.330 "name": "raid_bdev1", 00:12:12.330 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:12:12.330 "strip_size_kb": 0, 00:12:12.330 "state": "configuring", 00:12:12.330 "raid_level": "raid1", 00:12:12.330 "superblock": true, 00:12:12.330 "num_base_bdevs": 3, 00:12:12.330 "num_base_bdevs_discovered": 1, 00:12:12.330 "num_base_bdevs_operational": 2, 00:12:12.330 "base_bdevs_list": [ 00:12:12.330 { 00:12:12.330 "name": null, 00:12:12.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.330 "is_configured": false, 00:12:12.330 "data_offset": 2048, 00:12:12.330 "data_size": 63488 00:12:12.330 }, 00:12:12.330 { 00:12:12.330 "name": "pt2", 00:12:12.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.330 "is_configured": true, 00:12:12.330 "data_offset": 2048, 00:12:12.330 "data_size": 63488 00:12:12.330 }, 00:12:12.330 { 00:12:12.330 "name": null, 00:12:12.330 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.330 "is_configured": false, 00:12:12.330 "data_offset": 2048, 00:12:12.330 "data_size": 63488 00:12:12.330 } 00:12:12.330 ] 00:12:12.330 }' 00:12:12.330 10:16:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.330 10:16:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.896 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:12:12.896 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:12.896 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:12:12.896 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:13.154 [2024-06-10 10:16:18.718111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:13.155 [2024-06-10 10:16:18.718169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.155 [2024-06-10 10:16:18.718181] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b037c80 00:12:13.155 [2024-06-10 10:16:18.718189] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.155 [2024-06-10 10:16:18.718282] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.155 [2024-06-10 10:16:18.718291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:13.155 [2024-06-10 10:16:18.718311] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:13.155 [2024-06-10 10:16:18.718320] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:13.155 [2024-06-10 10:16:18.718371] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b038180 00:12:13.155 [2024-06-10 10:16:18.718379] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.155 [2024-06-10 10:16:18.718406] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b09ae20 00:12:13.155 [2024-06-10 10:16:18.718441] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b038180 00:12:13.155 [2024-06-10 10:16:18.718445] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b038180 00:12:13.155 [2024-06-10 10:16:18.718462] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.155 pt3 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.155 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.411 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.411 "name": "raid_bdev1", 00:12:13.411 "uuid": "716f2050-2712-11ef-b084-113036b5c18d", 00:12:13.411 "strip_size_kb": 0, 00:12:13.411 "state": "online", 00:12:13.411 "raid_level": "raid1", 00:12:13.411 "superblock": true, 00:12:13.411 "num_base_bdevs": 3, 00:12:13.411 "num_base_bdevs_discovered": 2, 00:12:13.411 "num_base_bdevs_operational": 2, 00:12:13.411 "base_bdevs_list": [ 00:12:13.411 { 00:12:13.411 "name": null, 00:12:13.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.411 "is_configured": false, 00:12:13.411 "data_offset": 2048, 00:12:13.411 "data_size": 63488 00:12:13.411 }, 00:12:13.411 { 00:12:13.411 "name": "pt2", 00:12:13.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.411 "is_configured": true, 00:12:13.411 "data_offset": 2048, 00:12:13.411 "data_size": 63488 00:12:13.411 }, 00:12:13.411 { 00:12:13.411 "name": "pt3", 00:12:13.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.411 "is_configured": true, 00:12:13.411 "data_offset": 2048, 00:12:13.411 "data_size": 63488 00:12:13.411 } 00:12:13.411 ] 00:12:13.411 }' 00:12:13.411 10:16:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.411 10:16:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.977 10:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:13.977 10:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:14.235 10:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- 
# [[ false == \f\a\l\s\e ]] 00:12:14.235 10:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:12:14.235 10:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:14.494 [2024-06-10 10:16:19.866170] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 716f2050-2712-11ef-b084-113036b5c18d '!=' 716f2050-2712-11ef-b084-113036b5c18d ']' 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 58327 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 58327 ']' 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 58327 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps -c -o command 58327 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # tail -1 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:12:14.494 killing process with pid 58327 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 58327' 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 58327 00:12:14.494 [2024-06-10 10:16:19.900867] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.494 [2024-06-10 10:16:19.900904] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.494 [2024-06-10 10:16:19.900920] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.494 [2024-06-10 10:16:19.900926] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b038180 name raid_bdev1, state offline 00:12:14.494 10:16:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 58327 00:12:14.494 [2024-06-10 10:16:19.915218] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.494 10:16:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:14.494 00:12:14.494 real 0m19.280s 00:12:14.494 user 0m35.072s 00:12:14.494 sys 0m2.742s 00:12:14.494 10:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:14.494 10:16:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.494 ************************************ 00:12:14.494 END TEST raid_superblock_test 00:12:14.494 ************************************ 00:12:14.752 10:16:20 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:14.752 10:16:20 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:12:14.752 10:16:20 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:14.752 10:16:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.752 ************************************ 00:12:14.752 
START TEST raid_read_error_test 00:12:14.752 ************************************ 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 3 read 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.I9TU69Bj 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58881 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58881 /var/tmp/spdk-raid.sock 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 58881 ']' 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:14.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
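The fixture assembled over the next few entries boils down to the sketch below: each mirror leg is a malloc bdev wrapped by an error bdev (for fault injection) and a passthru bdev, and the three legs are combined into a raid1 array with a superblock. This is a condensed paraphrase using only RPCs that appear in this run; the loop and the $RPC shorthand are illustrative, not commands that were executed here.

    RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc            # 32 MiB backing bdev, 512-byte blocks
        $RPC bdev_error_create BaseBdev${i}_malloc                       # exposes EE_BaseBdev${i}_malloc for error injection
        $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s   # -s: create with superblock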
00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:14.752 10:16:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.752 [2024-06-10 10:16:20.146400] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:12:14.752 [2024-06-10 10:16:20.146665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:15.010 EAL: TSC is not safe to use in SMP mode 00:12:15.010 EAL: TSC is not invariant 00:12:15.010 [2024-06-10 10:16:20.600507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.269 [2024-06-10 10:16:20.680889] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:15.269 [2024-06-10 10:16:20.683245] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.269 [2024-06-10 10:16:20.684020] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.269 [2024-06-10 10:16:20.684044] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.836 10:16:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:15.836 10:16:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:12:15.836 10:16:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:15.836 10:16:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:16.093 BaseBdev1_malloc 00:12:16.093 10:16:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:16.351 true 00:12:16.351 10:16:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:16.609 [2024-06-10 10:16:22.167070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:16.609 [2024-06-10 10:16:22.167148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.609 [2024-06-10 10:16:22.167179] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ab2d780 00:12:16.609 [2024-06-10 10:16:22.167198] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.609 [2024-06-10 10:16:22.167792] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.609 [2024-06-10 10:16:22.167835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.609 BaseBdev1 00:12:16.609 10:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:16.609 10:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:16.892 BaseBdev2_malloc 00:12:16.892 10:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:17.148 true 00:12:17.148 10:16:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:17.407 [2024-06-10 10:16:22.995059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:17.407 [2024-06-10 10:16:22.995130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.407 [2024-06-10 10:16:22.995154] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ab2dc80 00:12:17.407 [2024-06-10 10:16:22.995163] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.407 [2024-06-10 10:16:22.995648] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.407 [2024-06-10 10:16:22.995678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:17.407 BaseBdev2 00:12:17.666 10:16:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:17.666 10:16:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:17.924 BaseBdev3_malloc 00:12:17.924 10:16:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:18.181 true 00:12:18.181 10:16:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:18.440 [2024-06-10 10:16:23.843102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:18.440 [2024-06-10 10:16:23.843165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.440 [2024-06-10 10:16:23.843194] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ab2e180 00:12:18.440 [2024-06-10 10:16:23.843202] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.440 [2024-06-10 10:16:23.843734] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.440 [2024-06-10 10:16:23.843763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:18.440 BaseBdev3 00:12:18.440 10:16:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:18.698 [2024-06-10 10:16:24.151105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.698 [2024-06-10 10:16:24.151529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.698 [2024-06-10 10:16:24.151547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.698 [2024-06-10 10:16:24.151603] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ab2e400 00:12:18.698 [2024-06-10 10:16:24.151608] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:18.698 [2024-06-10 10:16:24.151638] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ab99e20 00:12:18.698 [2024-06-10 10:16:24.151733] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ab2e400 00:12:18.698 [2024-06-10 10:16:24.151737] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82ab2e400 00:12:18.698 [2024-06-10 10:16:24.151758] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.698 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.956 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:18.956 "name": "raid_bdev1", 00:12:18.956 "uuid": "7d93da3b-2712-11ef-b084-113036b5c18d", 00:12:18.956 "strip_size_kb": 0, 00:12:18.956 "state": "online", 00:12:18.956 "raid_level": "raid1", 00:12:18.956 "superblock": true, 00:12:18.956 "num_base_bdevs": 3, 00:12:18.956 "num_base_bdevs_discovered": 3, 00:12:18.956 "num_base_bdevs_operational": 3, 00:12:18.956 "base_bdevs_list": [ 00:12:18.956 { 00:12:18.956 "name": "BaseBdev1", 00:12:18.956 "uuid": "aacaccd3-8b30-5c5f-a01c-6f721040c110", 00:12:18.956 "is_configured": true, 00:12:18.956 "data_offset": 2048, 00:12:18.956 "data_size": 63488 00:12:18.956 }, 00:12:18.956 { 00:12:18.956 "name": "BaseBdev2", 00:12:18.956 "uuid": "b7245b3e-5eca-9b5f-b12e-4039db70e83d", 00:12:18.956 "is_configured": true, 00:12:18.956 "data_offset": 2048, 00:12:18.956 "data_size": 63488 00:12:18.956 }, 00:12:18.956 { 00:12:18.956 "name": "BaseBdev3", 00:12:18.956 "uuid": "67a91129-a539-2855-87eb-f80fcdf84029", 00:12:18.956 "is_configured": true, 00:12:18.956 "data_offset": 2048, 00:12:18.956 "data_size": 63488 00:12:18.956 } 00:12:18.956 ] 00:12:18.956 }' 00:12:18.956 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:18.956 10:16:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.522 10:16:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:19.522 10:16:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:19.522 [2024-06-10 10:16:25.047209] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ab99ec0 00:12:20.506 10:16:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.766 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.026 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.026 "name": "raid_bdev1", 00:12:21.026 "uuid": "7d93da3b-2712-11ef-b084-113036b5c18d", 00:12:21.026 "strip_size_kb": 0, 00:12:21.026 "state": "online", 00:12:21.026 "raid_level": "raid1", 00:12:21.026 "superblock": true, 00:12:21.026 "num_base_bdevs": 3, 00:12:21.026 "num_base_bdevs_discovered": 3, 00:12:21.026 "num_base_bdevs_operational": 3, 00:12:21.026 "base_bdevs_list": [ 00:12:21.026 { 00:12:21.026 "name": "BaseBdev1", 00:12:21.026 "uuid": "aacaccd3-8b30-5c5f-a01c-6f721040c110", 00:12:21.026 "is_configured": true, 00:12:21.026 "data_offset": 2048, 00:12:21.026 "data_size": 63488 00:12:21.026 }, 00:12:21.026 { 00:12:21.026 "name": "BaseBdev2", 00:12:21.026 "uuid": "b7245b3e-5eca-9b5f-b12e-4039db70e83d", 00:12:21.026 "is_configured": true, 00:12:21.026 "data_offset": 2048, 00:12:21.026 "data_size": 63488 00:12:21.026 }, 00:12:21.026 { 00:12:21.026 "name": "BaseBdev3", 00:12:21.026 "uuid": "67a91129-a539-2855-87eb-f80fcdf84029", 00:12:21.026 "is_configured": true, 00:12:21.026 
"data_offset": 2048, 00:12:21.026 "data_size": 63488 00:12:21.026 } 00:12:21.026 ] 00:12:21.026 }' 00:12:21.026 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.026 10:16:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.593 10:16:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:21.852 [2024-06-10 10:16:27.245269] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:21.852 [2024-06-10 10:16:27.245303] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.852 [2024-06-10 10:16:27.245642] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.852 [2024-06-10 10:16:27.245659] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.852 [2024-06-10 10:16:27.245676] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.852 [2024-06-10 10:16:27.245681] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ab2e400 name raid_bdev1, state offline 00:12:21.852 0 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58881 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 58881 ']' 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 58881 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 58881 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # tail -1 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:12:21.852 killing process with pid 58881 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 58881' 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 58881 00:12:21.852 [2024-06-10 10:16:27.272078] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:21.852 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 58881 00:12:21.852 [2024-06-10 10:16:27.287343] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.110 10:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:22.111 10:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.I9TU69Bj 00:12:22.111 10:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:22.111 10:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:22.111 10:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:22.111 10:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:22.111 10:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:22.111 
10:16:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:22.111 00:12:22.111 real 0m7.354s 00:12:22.111 user 0m11.934s 00:12:22.111 sys 0m1.082s 00:12:22.111 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:22.111 ************************************ 00:12:22.111 END TEST raid_read_error_test 00:12:22.111 ************************************ 00:12:22.111 10:16:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.111 10:16:27 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:22.111 10:16:27 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:12:22.111 10:16:27 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:22.111 10:16:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.111 ************************************ 00:12:22.111 START TEST raid_write_error_test 00:12:22.111 ************************************ 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 3 write 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.gn403xaE 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=59016 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 59016 /var/tmp/spdk-raid.sock 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 59016 ']' 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:22.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:22.111 10:16:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.111 [2024-06-10 10:16:27.543481] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:12:22.111 [2024-06-10 10:16:27.543908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:22.678 EAL: TSC is not safe to use in SMP mode 00:12:22.678 EAL: TSC is not invariant 00:12:22.678 [2024-06-10 10:16:28.008376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.678 [2024-06-10 10:16:28.092960] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:12:22.678 [2024-06-10 10:16:28.095175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.678 [2024-06-10 10:16:28.095901] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.678 [2024-06-10 10:16:28.095913] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.245 10:16:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:23.245 10:16:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:12:23.245 10:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:23.245 10:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:23.503 BaseBdev1_malloc 00:12:23.503 10:16:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:23.762 true 00:12:23.762 10:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:24.021 [2024-06-10 10:16:29.531178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:24.021 [2024-06-10 10:16:29.531253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.021 [2024-06-10 10:16:29.531280] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a633780 00:12:24.021 [2024-06-10 10:16:29.531288] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.021 [2024-06-10 10:16:29.531806] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.021 [2024-06-10 10:16:29.531828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:24.021 BaseBdev1 00:12:24.021 10:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:24.021 10:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:24.280 BaseBdev2_malloc 00:12:24.280 10:16:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:24.539 true 00:12:24.539 10:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:24.797 [2024-06-10 10:16:30.331189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:24.797 [2024-06-10 10:16:30.331259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.797 [2024-06-10 10:16:30.331288] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a633c80 00:12:24.797 [2024-06-10 10:16:30.331296] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.797 [2024-06-10 10:16:30.331998] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.797 [2024-06-10 10:16:30.332054] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:24.797 BaseBdev2 00:12:24.797 10:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:24.797 10:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:25.055 BaseBdev3_malloc 00:12:25.055 10:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:25.314 true 00:12:25.314 10:16:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:25.573 [2024-06-10 10:16:31.035220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:25.573 [2024-06-10 10:16:31.035291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.573 [2024-06-10 10:16:31.035320] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a634180 00:12:25.573 [2024-06-10 10:16:31.035328] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.573 [2024-06-10 10:16:31.035904] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.573 [2024-06-10 10:16:31.035933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:25.573 BaseBdev3 00:12:25.573 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:25.831 [2024-06-10 10:16:31.295226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.831 [2024-06-10 10:16:31.295711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.831 [2024-06-10 10:16:31.295730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.831 [2024-06-10 10:16:31.295792] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a634400 00:12:25.831 [2024-06-10 10:16:31.295797] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.831 [2024-06-10 10:16:31.295828] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a69fe20 00:12:25.831 [2024-06-10 10:16:31.295925] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a634400 00:12:25.831 [2024-06-10 10:16:31.295929] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a634400 00:12:25.831 [2024-06-10 10:16:31.295954] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.831 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:25.831 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:25.831 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:25.831 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:25.831 10:16:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:25.831 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:25.831 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.831 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.831 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.831 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.832 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.832 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.090 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.090 "name": "raid_bdev1", 00:12:26.090 "uuid": "81d5f54d-2712-11ef-b084-113036b5c18d", 00:12:26.090 "strip_size_kb": 0, 00:12:26.090 "state": "online", 00:12:26.090 "raid_level": "raid1", 00:12:26.090 "superblock": true, 00:12:26.090 "num_base_bdevs": 3, 00:12:26.090 "num_base_bdevs_discovered": 3, 00:12:26.090 "num_base_bdevs_operational": 3, 00:12:26.090 "base_bdevs_list": [ 00:12:26.090 { 00:12:26.090 "name": "BaseBdev1", 00:12:26.090 "uuid": "478b252d-0cc8-7a5a-9678-b464d8cc0b82", 00:12:26.090 "is_configured": true, 00:12:26.090 "data_offset": 2048, 00:12:26.090 "data_size": 63488 00:12:26.090 }, 00:12:26.090 { 00:12:26.090 "name": "BaseBdev2", 00:12:26.090 "uuid": "e2b2e67c-3cfe-f85e-b354-18c6bbd2d39a", 00:12:26.090 "is_configured": true, 00:12:26.090 "data_offset": 2048, 00:12:26.090 "data_size": 63488 00:12:26.090 }, 00:12:26.090 { 00:12:26.090 "name": "BaseBdev3", 00:12:26.090 "uuid": "0b083578-fea0-395e-8eef-8aacb08047b6", 00:12:26.090 "is_configured": true, 00:12:26.090 "data_offset": 2048, 00:12:26.090 "data_size": 63488 00:12:26.090 } 00:12:26.090 ] 00:12:26.090 }' 00:12:26.090 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.090 10:16:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.658 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:26.658 10:16:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:26.658 [2024-06-10 10:16:32.075291] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a69fec0 00:12:27.754 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:27.754 [2024-06-10 10:16:33.303444] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:27.754 [2024-06-10 10:16:33.303513] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.755 [2024-06-10 10:16:33.303643] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x82a69fec0 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 
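A write failure is handled differently: raid1 cannot silently satisfy a failed write, so the raid layer fails that leg (the "Failing base bdev in slot 0 ('BaseBdev1')" notice above) and the array degrades to two operational members. The expectation logic applied here amounts to the following paraphrase of the checks at bdev_raid.sh@829-@835 (a sketch, not the script verbatim):

    if [[ $raid_level = raid1 && $error_io_type = write ]]; then
        expected_num_base_bdevs=$((num_base_bdevs - 1))    # failed leg is evicted: 3 -> 2
    else
        expected_num_base_bdevs=$num_base_bdevs            # read errors are recovered, all legs stay
    fi
    verify_raid_bdev_state raid_bdev1 online raid1 0 "$expected_num_base_bdevs"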
00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.755 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.321 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:28.321 "name": "raid_bdev1", 00:12:28.321 "uuid": "81d5f54d-2712-11ef-b084-113036b5c18d", 00:12:28.321 "strip_size_kb": 0, 00:12:28.321 "state": "online", 00:12:28.321 "raid_level": "raid1", 00:12:28.321 "superblock": true, 00:12:28.321 "num_base_bdevs": 3, 00:12:28.321 "num_base_bdevs_discovered": 2, 00:12:28.321 "num_base_bdevs_operational": 2, 00:12:28.321 "base_bdevs_list": [ 00:12:28.321 { 00:12:28.321 "name": null, 00:12:28.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.321 "is_configured": false, 00:12:28.321 "data_offset": 2048, 00:12:28.321 "data_size": 63488 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "name": "BaseBdev2", 00:12:28.321 "uuid": "e2b2e67c-3cfe-f85e-b354-18c6bbd2d39a", 00:12:28.321 "is_configured": true, 00:12:28.321 "data_offset": 2048, 00:12:28.321 "data_size": 63488 00:12:28.321 }, 00:12:28.321 { 00:12:28.321 "name": "BaseBdev3", 00:12:28.321 "uuid": "0b083578-fea0-395e-8eef-8aacb08047b6", 00:12:28.321 "is_configured": true, 00:12:28.321 "data_offset": 2048, 00:12:28.321 "data_size": 63488 00:12:28.321 } 00:12:28.321 ] 00:12:28.321 }' 00:12:28.321 10:16:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:28.321 10:16:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.580 10:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:28.852 [2024-06-10 10:16:34.360037] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.852 [2024-06-10 10:16:34.360078] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:28.852 [2024-06-10 10:16:34.360591] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:28.852 [2024-06-10 10:16:34.360641] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.852 [2024-06-10 10:16:34.360670] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:28.852 [2024-06-10 10:16:34.360681] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a634400 name raid_bdev1, state offline 00:12:28.852 0 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 59016 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 59016 ']' 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 59016 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # tail -1 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 59016 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:12:28.852 killing process with pid 59016 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 59016' 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 59016 00:12:28.852 [2024-06-10 10:16:34.391870] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.852 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 59016 00:12:28.852 [2024-06-10 10:16:34.406576] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.118 10:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:29.118 10:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.gn403xaE 00:12:29.118 10:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:29.118 10:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:29.118 10:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:29.118 10:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:29.118 10:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:29.118 10:16:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:29.118 00:12:29.118 real 0m7.063s 00:12:29.118 user 0m11.425s 00:12:29.118 sys 0m0.978s 00:12:29.118 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:29.118 10:16:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.118 ************************************ 00:12:29.118 END TEST raid_write_error_test 00:12:29.118 ************************************ 00:12:29.118 10:16:34 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:12:29.118 10:16:34 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:29.118 10:16:34 bdev_raid -- bdev/bdev_raid.sh@867 -- # 
run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:29.118 10:16:34 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:12:29.118 10:16:34 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:29.118 10:16:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.118 ************************************ 00:12:29.118 START TEST raid_state_function_test 00:12:29.118 ************************************ 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 4 false 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:29.118 
10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=59149 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:29.118 Process raid pid: 59149 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 59149' 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 59149 /var/tmp/spdk-raid.sock 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 59149 ']' 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:29.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:29.118 10:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.118 [2024-06-10 10:16:34.635770] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:12:29.118 [2024-06-10 10:16:34.635964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:29.684 EAL: TSC is not safe to use in SMP mode 00:12:29.684 EAL: TSC is not invariant 00:12:29.684 [2024-06-10 10:16:35.110662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.684 [2024-06-10 10:16:35.228562] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
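The state-function run that follows starts by exercising the "configuring" state: Existed_Raid is created against four base bdevs that do not exist yet, so it sits in configuring with zero discovered members, and creating a base bdev afterwards lets the raid claim it. Condensed sketch (reusing the $RPC shorthand from the first sketch; a summary of the run below, which also deletes and re-creates the raid between steps):

    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
    # expected: "state": "configuring", num_base_bdevs_discovered 0
    $RPC bdev_malloc_create 32 512 -b BaseBdev1                           # first member appears and is claimed
    # expected afterwards: still "configuring", num_base_bdevs_discovered 1 of 4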
00:12:29.684 [2024-06-10 10:16:35.231263] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.684 [2024-06-10 10:16:35.232247] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.684 [2024-06-10 10:16:35.232275] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.249 10:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:30.249 10:16:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:12:30.249 10:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:30.816 [2024-06-10 10:16:36.149384] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:30.816 [2024-06-10 10:16:36.149474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:30.816 [2024-06-10 10:16:36.149483] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.816 [2024-06-10 10:16:36.149499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.816 [2024-06-10 10:16:36.149506] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:30.816 [2024-06-10 10:16:36.149520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.816 [2024-06-10 10:16:36.149527] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:30.816 [2024-06-10 10:16:36.149539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.816 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.075 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:31.075 "name": "Existed_Raid", 00:12:31.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.075 
"strip_size_kb": 64, 00:12:31.075 "state": "configuring", 00:12:31.075 "raid_level": "raid0", 00:12:31.075 "superblock": false, 00:12:31.075 "num_base_bdevs": 4, 00:12:31.075 "num_base_bdevs_discovered": 0, 00:12:31.075 "num_base_bdevs_operational": 4, 00:12:31.075 "base_bdevs_list": [ 00:12:31.075 { 00:12:31.075 "name": "BaseBdev1", 00:12:31.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.075 "is_configured": false, 00:12:31.075 "data_offset": 0, 00:12:31.075 "data_size": 0 00:12:31.075 }, 00:12:31.075 { 00:12:31.075 "name": "BaseBdev2", 00:12:31.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.075 "is_configured": false, 00:12:31.075 "data_offset": 0, 00:12:31.075 "data_size": 0 00:12:31.075 }, 00:12:31.075 { 00:12:31.075 "name": "BaseBdev3", 00:12:31.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.075 "is_configured": false, 00:12:31.075 "data_offset": 0, 00:12:31.075 "data_size": 0 00:12:31.075 }, 00:12:31.075 { 00:12:31.075 "name": "BaseBdev4", 00:12:31.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.075 "is_configured": false, 00:12:31.075 "data_offset": 0, 00:12:31.075 "data_size": 0 00:12:31.075 } 00:12:31.075 ] 00:12:31.075 }' 00:12:31.075 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:31.075 10:16:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.333 10:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:31.591 [2024-06-10 10:16:37.125361] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.591 [2024-06-10 10:16:37.125391] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d315500 name Existed_Raid, state configuring 00:12:31.591 10:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:31.849 [2024-06-10 10:16:37.377383] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:31.849 [2024-06-10 10:16:37.377446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:31.849 [2024-06-10 10:16:37.377452] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.849 [2024-06-10 10:16:37.377460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.849 [2024-06-10 10:16:37.377472] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:31.849 [2024-06-10 10:16:37.377480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:31.849 [2024-06-10 10:16:37.377483] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:31.849 [2024-06-10 10:16:37.377490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:31.849 10:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:32.107 [2024-06-10 10:16:37.710389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.379 BaseBdev1 00:12:32.379 10:16:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:32.379 10:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:12:32.379 10:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:32.379 10:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:32.379 10:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:32.379 10:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:32.379 10:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:32.638 10:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:32.897 [ 00:12:32.897 { 00:12:32.897 "name": "BaseBdev1", 00:12:32.897 "aliases": [ 00:12:32.897 "85a8afa4-2712-11ef-b084-113036b5c18d" 00:12:32.897 ], 00:12:32.897 "product_name": "Malloc disk", 00:12:32.897 "block_size": 512, 00:12:32.897 "num_blocks": 65536, 00:12:32.897 "uuid": "85a8afa4-2712-11ef-b084-113036b5c18d", 00:12:32.897 "assigned_rate_limits": { 00:12:32.897 "rw_ios_per_sec": 0, 00:12:32.897 "rw_mbytes_per_sec": 0, 00:12:32.897 "r_mbytes_per_sec": 0, 00:12:32.897 "w_mbytes_per_sec": 0 00:12:32.897 }, 00:12:32.897 "claimed": true, 00:12:32.897 "claim_type": "exclusive_write", 00:12:32.897 "zoned": false, 00:12:32.897 "supported_io_types": { 00:12:32.897 "read": true, 00:12:32.897 "write": true, 00:12:32.897 "unmap": true, 00:12:32.897 "write_zeroes": true, 00:12:32.897 "flush": true, 00:12:32.897 "reset": true, 00:12:32.897 "compare": false, 00:12:32.897 "compare_and_write": false, 00:12:32.897 "abort": true, 00:12:32.897 "nvme_admin": false, 00:12:32.897 "nvme_io": false 00:12:32.897 }, 00:12:32.897 "memory_domains": [ 00:12:32.897 { 00:12:32.897 "dma_device_id": "system", 00:12:32.897 "dma_device_type": 1 00:12:32.897 }, 00:12:32.897 { 00:12:32.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.897 "dma_device_type": 2 00:12:32.897 } 00:12:32.897 ], 00:12:32.897 "driver_specific": {} 00:12:32.897 } 00:12:32.897 ] 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.897 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.156 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:33.156 "name": "Existed_Raid", 00:12:33.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.156 "strip_size_kb": 64, 00:12:33.156 "state": "configuring", 00:12:33.156 "raid_level": "raid0", 00:12:33.156 "superblock": false, 00:12:33.156 "num_base_bdevs": 4, 00:12:33.156 "num_base_bdevs_discovered": 1, 00:12:33.156 "num_base_bdevs_operational": 4, 00:12:33.156 "base_bdevs_list": [ 00:12:33.156 { 00:12:33.156 "name": "BaseBdev1", 00:12:33.156 "uuid": "85a8afa4-2712-11ef-b084-113036b5c18d", 00:12:33.156 "is_configured": true, 00:12:33.156 "data_offset": 0, 00:12:33.156 "data_size": 65536 00:12:33.156 }, 00:12:33.156 { 00:12:33.156 "name": "BaseBdev2", 00:12:33.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.156 "is_configured": false, 00:12:33.156 "data_offset": 0, 00:12:33.156 "data_size": 0 00:12:33.156 }, 00:12:33.156 { 00:12:33.156 "name": "BaseBdev3", 00:12:33.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.156 "is_configured": false, 00:12:33.156 "data_offset": 0, 00:12:33.156 "data_size": 0 00:12:33.156 }, 00:12:33.156 { 00:12:33.156 "name": "BaseBdev4", 00:12:33.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.156 "is_configured": false, 00:12:33.156 "data_offset": 0, 00:12:33.156 "data_size": 0 00:12:33.156 } 00:12:33.156 ] 00:12:33.156 }' 00:12:33.156 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:33.156 10:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.415 10:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:33.682 [2024-06-10 10:16:39.269435] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:33.682 [2024-06-10 10:16:39.269473] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d315500 name Existed_Raid, state configuring 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:33.940 [2024-06-10 10:16:39.501449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.940 [2024-06-10 10:16:39.502191] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:33.940 [2024-06-10 10:16:39.502236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:33.940 [2024-06-10 10:16:39.502241] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:33.940 [2024-06-10 10:16:39.502249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:33.940 [2024-06-10 10:16:39.502253] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev4 00:12:33.940 [2024-06-10 10:16:39.502260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.940 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.211 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.211 "name": "Existed_Raid", 00:12:34.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.211 "strip_size_kb": 64, 00:12:34.211 "state": "configuring", 00:12:34.211 "raid_level": "raid0", 00:12:34.211 "superblock": false, 00:12:34.211 "num_base_bdevs": 4, 00:12:34.211 "num_base_bdevs_discovered": 1, 00:12:34.211 "num_base_bdevs_operational": 4, 00:12:34.211 "base_bdevs_list": [ 00:12:34.211 { 00:12:34.211 "name": "BaseBdev1", 00:12:34.211 "uuid": "85a8afa4-2712-11ef-b084-113036b5c18d", 00:12:34.211 "is_configured": true, 00:12:34.211 "data_offset": 0, 00:12:34.211 "data_size": 65536 00:12:34.211 }, 00:12:34.211 { 00:12:34.211 "name": "BaseBdev2", 00:12:34.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.211 "is_configured": false, 00:12:34.211 "data_offset": 0, 00:12:34.211 "data_size": 0 00:12:34.211 }, 00:12:34.211 { 00:12:34.211 "name": "BaseBdev3", 00:12:34.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.211 "is_configured": false, 00:12:34.211 "data_offset": 0, 00:12:34.211 "data_size": 0 00:12:34.211 }, 00:12:34.211 { 00:12:34.211 "name": "BaseBdev4", 00:12:34.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.211 "is_configured": false, 00:12:34.211 "data_offset": 0, 00:12:34.211 "data_size": 0 00:12:34.211 } 00:12:34.211 ] 00:12:34.211 }' 00:12:34.211 10:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.211 10:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.778 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:34.778 [2024-06-10 10:16:40.325672] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.778 BaseBdev2 00:12:34.778 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:34.778 10:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:12:34.778 10:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:34.778 10:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:34.778 10:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:34.778 10:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:34.778 10:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:35.345 10:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:35.628 [ 00:12:35.628 { 00:12:35.628 "name": "BaseBdev2", 00:12:35.628 "aliases": [ 00:12:35.628 "8737de91-2712-11ef-b084-113036b5c18d" 00:12:35.628 ], 00:12:35.628 "product_name": "Malloc disk", 00:12:35.628 "block_size": 512, 00:12:35.628 "num_blocks": 65536, 00:12:35.628 "uuid": "8737de91-2712-11ef-b084-113036b5c18d", 00:12:35.628 "assigned_rate_limits": { 00:12:35.628 "rw_ios_per_sec": 0, 00:12:35.628 "rw_mbytes_per_sec": 0, 00:12:35.628 "r_mbytes_per_sec": 0, 00:12:35.628 "w_mbytes_per_sec": 0 00:12:35.628 }, 00:12:35.628 "claimed": true, 00:12:35.628 "claim_type": "exclusive_write", 00:12:35.628 "zoned": false, 00:12:35.628 "supported_io_types": { 00:12:35.628 "read": true, 00:12:35.628 "write": true, 00:12:35.628 "unmap": true, 00:12:35.628 "write_zeroes": true, 00:12:35.628 "flush": true, 00:12:35.628 "reset": true, 00:12:35.628 "compare": false, 00:12:35.628 "compare_and_write": false, 00:12:35.628 "abort": true, 00:12:35.628 "nvme_admin": false, 00:12:35.628 "nvme_io": false 00:12:35.628 }, 00:12:35.628 "memory_domains": [ 00:12:35.628 { 00:12:35.628 "dma_device_id": "system", 00:12:35.628 "dma_device_type": 1 00:12:35.628 }, 00:12:35.628 { 00:12:35.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.628 "dma_device_type": 2 00:12:35.628 } 00:12:35.628 ], 00:12:35.628 "driver_specific": {} 00:12:35.628 } 00:12:35.628 ] 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.628 10:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.897 10:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.897 "name": "Existed_Raid", 00:12:35.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.897 "strip_size_kb": 64, 00:12:35.897 "state": "configuring", 00:12:35.897 "raid_level": "raid0", 00:12:35.897 "superblock": false, 00:12:35.897 "num_base_bdevs": 4, 00:12:35.897 "num_base_bdevs_discovered": 2, 00:12:35.897 "num_base_bdevs_operational": 4, 00:12:35.897 "base_bdevs_list": [ 00:12:35.897 { 00:12:35.897 "name": "BaseBdev1", 00:12:35.897 "uuid": "85a8afa4-2712-11ef-b084-113036b5c18d", 00:12:35.897 "is_configured": true, 00:12:35.897 "data_offset": 0, 00:12:35.897 "data_size": 65536 00:12:35.897 }, 00:12:35.897 { 00:12:35.897 "name": "BaseBdev2", 00:12:35.897 "uuid": "8737de91-2712-11ef-b084-113036b5c18d", 00:12:35.897 "is_configured": true, 00:12:35.897 "data_offset": 0, 00:12:35.897 "data_size": 65536 00:12:35.897 }, 00:12:35.897 { 00:12:35.897 "name": "BaseBdev3", 00:12:35.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.897 "is_configured": false, 00:12:35.897 "data_offset": 0, 00:12:35.897 "data_size": 0 00:12:35.897 }, 00:12:35.897 { 00:12:35.897 "name": "BaseBdev4", 00:12:35.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.897 "is_configured": false, 00:12:35.897 "data_offset": 0, 00:12:35.897 "data_size": 0 00:12:35.897 } 00:12:35.897 ] 00:12:35.897 }' 00:12:35.897 10:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.897 10:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.164 10:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:36.422 [2024-06-10 10:16:41.965638] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.422 BaseBdev3 00:12:36.422 10:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:36.422 10:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:12:36.422 10:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:36.422 10:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:36.422 10:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:36.422 10:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # 
bdev_timeout=2000 00:12:36.422 10:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:36.681 10:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:37.247 [ 00:12:37.247 { 00:12:37.247 "name": "BaseBdev3", 00:12:37.247 "aliases": [ 00:12:37.247 "88321e9d-2712-11ef-b084-113036b5c18d" 00:12:37.247 ], 00:12:37.247 "product_name": "Malloc disk", 00:12:37.247 "block_size": 512, 00:12:37.247 "num_blocks": 65536, 00:12:37.247 "uuid": "88321e9d-2712-11ef-b084-113036b5c18d", 00:12:37.247 "assigned_rate_limits": { 00:12:37.247 "rw_ios_per_sec": 0, 00:12:37.247 "rw_mbytes_per_sec": 0, 00:12:37.247 "r_mbytes_per_sec": 0, 00:12:37.247 "w_mbytes_per_sec": 0 00:12:37.247 }, 00:12:37.247 "claimed": true, 00:12:37.247 "claim_type": "exclusive_write", 00:12:37.247 "zoned": false, 00:12:37.247 "supported_io_types": { 00:12:37.247 "read": true, 00:12:37.247 "write": true, 00:12:37.247 "unmap": true, 00:12:37.247 "write_zeroes": true, 00:12:37.247 "flush": true, 00:12:37.247 "reset": true, 00:12:37.247 "compare": false, 00:12:37.247 "compare_and_write": false, 00:12:37.247 "abort": true, 00:12:37.247 "nvme_admin": false, 00:12:37.247 "nvme_io": false 00:12:37.247 }, 00:12:37.247 "memory_domains": [ 00:12:37.247 { 00:12:37.247 "dma_device_id": "system", 00:12:37.247 "dma_device_type": 1 00:12:37.247 }, 00:12:37.247 { 00:12:37.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.247 "dma_device_type": 2 00:12:37.247 } 00:12:37.247 ], 00:12:37.247 "driver_specific": {} 00:12:37.247 } 00:12:37.247 ] 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.247 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:12:37.506 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:37.506 "name": "Existed_Raid", 00:12:37.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.506 "strip_size_kb": 64, 00:12:37.506 "state": "configuring", 00:12:37.506 "raid_level": "raid0", 00:12:37.506 "superblock": false, 00:12:37.506 "num_base_bdevs": 4, 00:12:37.506 "num_base_bdevs_discovered": 3, 00:12:37.506 "num_base_bdevs_operational": 4, 00:12:37.506 "base_bdevs_list": [ 00:12:37.506 { 00:12:37.506 "name": "BaseBdev1", 00:12:37.506 "uuid": "85a8afa4-2712-11ef-b084-113036b5c18d", 00:12:37.506 "is_configured": true, 00:12:37.506 "data_offset": 0, 00:12:37.506 "data_size": 65536 00:12:37.506 }, 00:12:37.506 { 00:12:37.506 "name": "BaseBdev2", 00:12:37.506 "uuid": "8737de91-2712-11ef-b084-113036b5c18d", 00:12:37.506 "is_configured": true, 00:12:37.506 "data_offset": 0, 00:12:37.506 "data_size": 65536 00:12:37.506 }, 00:12:37.506 { 00:12:37.506 "name": "BaseBdev3", 00:12:37.506 "uuid": "88321e9d-2712-11ef-b084-113036b5c18d", 00:12:37.506 "is_configured": true, 00:12:37.506 "data_offset": 0, 00:12:37.506 "data_size": 65536 00:12:37.506 }, 00:12:37.506 { 00:12:37.506 "name": "BaseBdev4", 00:12:37.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.506 "is_configured": false, 00:12:37.506 "data_offset": 0, 00:12:37.506 "data_size": 0 00:12:37.506 } 00:12:37.506 ] 00:12:37.506 }' 00:12:37.506 10:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:37.506 10:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.764 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:38.022 [2024-06-10 10:16:43.429700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:38.022 [2024-06-10 10:16:43.429729] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d315a00 00:12:38.022 [2024-06-10 10:16:43.429733] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:38.022 [2024-06-10 10:16:43.429763] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d378ec0 00:12:38.022 [2024-06-10 10:16:43.429849] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d315a00 00:12:38.022 [2024-06-10 10:16:43.429861] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d315a00 00:12:38.022 [2024-06-10 10:16:43.429893] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.022 BaseBdev4 00:12:38.022 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:12:38.022 10:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:12:38.022 10:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:38.022 10:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:38.022 10:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:38.022 10:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:38.022 10:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:38.280 10:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:38.539 [ 00:12:38.539 { 00:12:38.539 "name": "BaseBdev4", 00:12:38.539 "aliases": [ 00:12:38.539 "89118486-2712-11ef-b084-113036b5c18d" 00:12:38.539 ], 00:12:38.539 "product_name": "Malloc disk", 00:12:38.539 "block_size": 512, 00:12:38.539 "num_blocks": 65536, 00:12:38.539 "uuid": "89118486-2712-11ef-b084-113036b5c18d", 00:12:38.539 "assigned_rate_limits": { 00:12:38.539 "rw_ios_per_sec": 0, 00:12:38.539 "rw_mbytes_per_sec": 0, 00:12:38.539 "r_mbytes_per_sec": 0, 00:12:38.539 "w_mbytes_per_sec": 0 00:12:38.539 }, 00:12:38.539 "claimed": true, 00:12:38.539 "claim_type": "exclusive_write", 00:12:38.539 "zoned": false, 00:12:38.539 "supported_io_types": { 00:12:38.539 "read": true, 00:12:38.539 "write": true, 00:12:38.539 "unmap": true, 00:12:38.539 "write_zeroes": true, 00:12:38.539 "flush": true, 00:12:38.539 "reset": true, 00:12:38.539 "compare": false, 00:12:38.539 "compare_and_write": false, 00:12:38.539 "abort": true, 00:12:38.539 "nvme_admin": false, 00:12:38.539 "nvme_io": false 00:12:38.539 }, 00:12:38.539 "memory_domains": [ 00:12:38.539 { 00:12:38.539 "dma_device_id": "system", 00:12:38.539 "dma_device_type": 1 00:12:38.539 }, 00:12:38.539 { 00:12:38.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.539 "dma_device_type": 2 00:12:38.539 } 00:12:38.539 ], 00:12:38.539 "driver_specific": {} 00:12:38.539 } 00:12:38.539 ] 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.539 10:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.797 10:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:12:38.797 "name": "Existed_Raid", 00:12:38.797 "uuid": "89118a7c-2712-11ef-b084-113036b5c18d", 00:12:38.797 "strip_size_kb": 64, 00:12:38.797 "state": "online", 00:12:38.797 "raid_level": "raid0", 00:12:38.797 "superblock": false, 00:12:38.797 "num_base_bdevs": 4, 00:12:38.797 "num_base_bdevs_discovered": 4, 00:12:38.797 "num_base_bdevs_operational": 4, 00:12:38.797 "base_bdevs_list": [ 00:12:38.797 { 00:12:38.797 "name": "BaseBdev1", 00:12:38.797 "uuid": "85a8afa4-2712-11ef-b084-113036b5c18d", 00:12:38.797 "is_configured": true, 00:12:38.797 "data_offset": 0, 00:12:38.797 "data_size": 65536 00:12:38.797 }, 00:12:38.797 { 00:12:38.797 "name": "BaseBdev2", 00:12:38.797 "uuid": "8737de91-2712-11ef-b084-113036b5c18d", 00:12:38.797 "is_configured": true, 00:12:38.797 "data_offset": 0, 00:12:38.797 "data_size": 65536 00:12:38.797 }, 00:12:38.797 { 00:12:38.797 "name": "BaseBdev3", 00:12:38.797 "uuid": "88321e9d-2712-11ef-b084-113036b5c18d", 00:12:38.797 "is_configured": true, 00:12:38.797 "data_offset": 0, 00:12:38.797 "data_size": 65536 00:12:38.797 }, 00:12:38.797 { 00:12:38.797 "name": "BaseBdev4", 00:12:38.797 "uuid": "89118486-2712-11ef-b084-113036b5c18d", 00:12:38.797 "is_configured": true, 00:12:38.797 "data_offset": 0, 00:12:38.797 "data_size": 65536 00:12:38.797 } 00:12:38.797 ] 00:12:38.797 }' 00:12:38.797 10:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:38.797 10:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.364 10:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:39.364 10:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:39.364 10:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:39.364 10:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:39.364 10:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:39.364 10:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:39.364 10:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:39.364 10:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:39.622 [2024-06-10 10:16:45.033764] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.622 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:39.622 "name": "Existed_Raid", 00:12:39.622 "aliases": [ 00:12:39.622 "89118a7c-2712-11ef-b084-113036b5c18d" 00:12:39.622 ], 00:12:39.622 "product_name": "Raid Volume", 00:12:39.622 "block_size": 512, 00:12:39.622 "num_blocks": 262144, 00:12:39.622 "uuid": "89118a7c-2712-11ef-b084-113036b5c18d", 00:12:39.622 "assigned_rate_limits": { 00:12:39.622 "rw_ios_per_sec": 0, 00:12:39.622 "rw_mbytes_per_sec": 0, 00:12:39.622 "r_mbytes_per_sec": 0, 00:12:39.622 "w_mbytes_per_sec": 0 00:12:39.622 }, 00:12:39.622 "claimed": false, 00:12:39.622 "zoned": false, 00:12:39.622 "supported_io_types": { 00:12:39.622 "read": true, 00:12:39.622 "write": true, 00:12:39.622 "unmap": true, 00:12:39.622 "write_zeroes": true, 00:12:39.622 "flush": true, 00:12:39.622 "reset": true, 00:12:39.622 "compare": false, 00:12:39.622 "compare_and_write": 
false, 00:12:39.622 "abort": false, 00:12:39.622 "nvme_admin": false, 00:12:39.622 "nvme_io": false 00:12:39.622 }, 00:12:39.622 "memory_domains": [ 00:12:39.622 { 00:12:39.622 "dma_device_id": "system", 00:12:39.622 "dma_device_type": 1 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.622 "dma_device_type": 2 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "dma_device_id": "system", 00:12:39.622 "dma_device_type": 1 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.622 "dma_device_type": 2 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "dma_device_id": "system", 00:12:39.622 "dma_device_type": 1 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.622 "dma_device_type": 2 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "dma_device_id": "system", 00:12:39.622 "dma_device_type": 1 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.622 "dma_device_type": 2 00:12:39.622 } 00:12:39.622 ], 00:12:39.622 "driver_specific": { 00:12:39.622 "raid": { 00:12:39.622 "uuid": "89118a7c-2712-11ef-b084-113036b5c18d", 00:12:39.622 "strip_size_kb": 64, 00:12:39.622 "state": "online", 00:12:39.622 "raid_level": "raid0", 00:12:39.622 "superblock": false, 00:12:39.622 "num_base_bdevs": 4, 00:12:39.622 "num_base_bdevs_discovered": 4, 00:12:39.622 "num_base_bdevs_operational": 4, 00:12:39.622 "base_bdevs_list": [ 00:12:39.622 { 00:12:39.622 "name": "BaseBdev1", 00:12:39.622 "uuid": "85a8afa4-2712-11ef-b084-113036b5c18d", 00:12:39.622 "is_configured": true, 00:12:39.622 "data_offset": 0, 00:12:39.622 "data_size": 65536 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "name": "BaseBdev2", 00:12:39.622 "uuid": "8737de91-2712-11ef-b084-113036b5c18d", 00:12:39.622 "is_configured": true, 00:12:39.622 "data_offset": 0, 00:12:39.622 "data_size": 65536 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "name": "BaseBdev3", 00:12:39.622 "uuid": "88321e9d-2712-11ef-b084-113036b5c18d", 00:12:39.622 "is_configured": true, 00:12:39.622 "data_offset": 0, 00:12:39.622 "data_size": 65536 00:12:39.622 }, 00:12:39.622 { 00:12:39.622 "name": "BaseBdev4", 00:12:39.622 "uuid": "89118486-2712-11ef-b084-113036b5c18d", 00:12:39.622 "is_configured": true, 00:12:39.622 "data_offset": 0, 00:12:39.622 "data_size": 65536 00:12:39.622 } 00:12:39.622 ] 00:12:39.622 } 00:12:39.622 } 00:12:39.622 }' 00:12:39.622 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:39.622 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:39.622 BaseBdev2 00:12:39.622 BaseBdev3 00:12:39.622 BaseBdev4' 00:12:39.622 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:39.622 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:39.622 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:39.881 "name": "BaseBdev1", 00:12:39.881 "aliases": [ 00:12:39.881 "85a8afa4-2712-11ef-b084-113036b5c18d" 00:12:39.881 ], 00:12:39.881 "product_name": "Malloc disk", 00:12:39.881 "block_size": 512, 00:12:39.881 "num_blocks": 65536, 00:12:39.881 
"uuid": "85a8afa4-2712-11ef-b084-113036b5c18d", 00:12:39.881 "assigned_rate_limits": { 00:12:39.881 "rw_ios_per_sec": 0, 00:12:39.881 "rw_mbytes_per_sec": 0, 00:12:39.881 "r_mbytes_per_sec": 0, 00:12:39.881 "w_mbytes_per_sec": 0 00:12:39.881 }, 00:12:39.881 "claimed": true, 00:12:39.881 "claim_type": "exclusive_write", 00:12:39.881 "zoned": false, 00:12:39.881 "supported_io_types": { 00:12:39.881 "read": true, 00:12:39.881 "write": true, 00:12:39.881 "unmap": true, 00:12:39.881 "write_zeroes": true, 00:12:39.881 "flush": true, 00:12:39.881 "reset": true, 00:12:39.881 "compare": false, 00:12:39.881 "compare_and_write": false, 00:12:39.881 "abort": true, 00:12:39.881 "nvme_admin": false, 00:12:39.881 "nvme_io": false 00:12:39.881 }, 00:12:39.881 "memory_domains": [ 00:12:39.881 { 00:12:39.881 "dma_device_id": "system", 00:12:39.881 "dma_device_type": 1 00:12:39.881 }, 00:12:39.881 { 00:12:39.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.881 "dma_device_type": 2 00:12:39.881 } 00:12:39.881 ], 00:12:39.881 "driver_specific": {} 00:12:39.881 }' 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:39.881 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:40.139 "name": "BaseBdev2", 00:12:40.139 "aliases": [ 00:12:40.139 "8737de91-2712-11ef-b084-113036b5c18d" 00:12:40.139 ], 00:12:40.139 "product_name": "Malloc disk", 00:12:40.139 "block_size": 512, 00:12:40.139 "num_blocks": 65536, 00:12:40.139 "uuid": "8737de91-2712-11ef-b084-113036b5c18d", 00:12:40.139 "assigned_rate_limits": { 00:12:40.139 "rw_ios_per_sec": 0, 00:12:40.139 "rw_mbytes_per_sec": 0, 00:12:40.139 "r_mbytes_per_sec": 0, 00:12:40.139 "w_mbytes_per_sec": 0 00:12:40.139 }, 00:12:40.139 "claimed": true, 00:12:40.139 "claim_type": "exclusive_write", 00:12:40.139 "zoned": false, 00:12:40.139 "supported_io_types": { 00:12:40.139 "read": true, 00:12:40.139 "write": true, 00:12:40.139 "unmap": true, 00:12:40.139 
"write_zeroes": true, 00:12:40.139 "flush": true, 00:12:40.139 "reset": true, 00:12:40.139 "compare": false, 00:12:40.139 "compare_and_write": false, 00:12:40.139 "abort": true, 00:12:40.139 "nvme_admin": false, 00:12:40.139 "nvme_io": false 00:12:40.139 }, 00:12:40.139 "memory_domains": [ 00:12:40.139 { 00:12:40.139 "dma_device_id": "system", 00:12:40.139 "dma_device_type": 1 00:12:40.139 }, 00:12:40.139 { 00:12:40.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.139 "dma_device_type": 2 00:12:40.139 } 00:12:40.139 ], 00:12:40.139 "driver_specific": {} 00:12:40.139 }' 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:40.139 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:40.398 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:40.398 "name": "BaseBdev3", 00:12:40.398 "aliases": [ 00:12:40.398 "88321e9d-2712-11ef-b084-113036b5c18d" 00:12:40.398 ], 00:12:40.398 "product_name": "Malloc disk", 00:12:40.398 "block_size": 512, 00:12:40.398 "num_blocks": 65536, 00:12:40.398 "uuid": "88321e9d-2712-11ef-b084-113036b5c18d", 00:12:40.398 "assigned_rate_limits": { 00:12:40.398 "rw_ios_per_sec": 0, 00:12:40.398 "rw_mbytes_per_sec": 0, 00:12:40.398 "r_mbytes_per_sec": 0, 00:12:40.398 "w_mbytes_per_sec": 0 00:12:40.398 }, 00:12:40.398 "claimed": true, 00:12:40.398 "claim_type": "exclusive_write", 00:12:40.398 "zoned": false, 00:12:40.398 "supported_io_types": { 00:12:40.398 "read": true, 00:12:40.398 "write": true, 00:12:40.398 "unmap": true, 00:12:40.398 "write_zeroes": true, 00:12:40.398 "flush": true, 00:12:40.398 "reset": true, 00:12:40.398 "compare": false, 00:12:40.398 "compare_and_write": false, 00:12:40.398 "abort": true, 00:12:40.398 "nvme_admin": false, 00:12:40.398 "nvme_io": false 00:12:40.398 }, 00:12:40.398 "memory_domains": [ 00:12:40.398 { 00:12:40.398 "dma_device_id": "system", 00:12:40.398 "dma_device_type": 1 00:12:40.398 }, 00:12:40.398 { 00:12:40.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.398 "dma_device_type": 
2 00:12:40.398 } 00:12:40.398 ], 00:12:40.398 "driver_specific": {} 00:12:40.398 }' 00:12:40.398 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:40.398 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:40.398 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:40.398 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.398 10:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.657 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:40.657 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:40.657 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:40.657 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:40.657 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:40.657 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:40.657 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:40.657 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:40.657 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:12:40.657 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:40.915 "name": "BaseBdev4", 00:12:40.915 "aliases": [ 00:12:40.915 "89118486-2712-11ef-b084-113036b5c18d" 00:12:40.915 ], 00:12:40.915 "product_name": "Malloc disk", 00:12:40.915 "block_size": 512, 00:12:40.915 "num_blocks": 65536, 00:12:40.915 "uuid": "89118486-2712-11ef-b084-113036b5c18d", 00:12:40.915 "assigned_rate_limits": { 00:12:40.915 "rw_ios_per_sec": 0, 00:12:40.915 "rw_mbytes_per_sec": 0, 00:12:40.915 "r_mbytes_per_sec": 0, 00:12:40.915 "w_mbytes_per_sec": 0 00:12:40.915 }, 00:12:40.915 "claimed": true, 00:12:40.915 "claim_type": "exclusive_write", 00:12:40.915 "zoned": false, 00:12:40.915 "supported_io_types": { 00:12:40.915 "read": true, 00:12:40.915 "write": true, 00:12:40.915 "unmap": true, 00:12:40.915 "write_zeroes": true, 00:12:40.915 "flush": true, 00:12:40.915 "reset": true, 00:12:40.915 "compare": false, 00:12:40.915 "compare_and_write": false, 00:12:40.915 "abort": true, 00:12:40.915 "nvme_admin": false, 00:12:40.915 "nvme_io": false 00:12:40.915 }, 00:12:40.915 "memory_domains": [ 00:12:40.915 { 00:12:40.915 "dma_device_id": "system", 00:12:40.915 "dma_device_type": 1 00:12:40.915 }, 00:12:40.915 { 00:12:40.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.915 "dma_device_type": 2 00:12:40.915 } 00:12:40.915 ], 00:12:40.915 "driver_specific": {} 00:12:40.915 }' 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:40.915 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:41.482 [2024-06-10 10:16:46.869734] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.482 [2024-06-10 10:16:46.869763] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.482 [2024-06-10 10:16:46.869778] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.482 10:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.739 10:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.739 "name": "Existed_Raid", 00:12:41.739 "uuid": 
"89118a7c-2712-11ef-b084-113036b5c18d", 00:12:41.739 "strip_size_kb": 64, 00:12:41.739 "state": "offline", 00:12:41.739 "raid_level": "raid0", 00:12:41.739 "superblock": false, 00:12:41.739 "num_base_bdevs": 4, 00:12:41.740 "num_base_bdevs_discovered": 3, 00:12:41.740 "num_base_bdevs_operational": 3, 00:12:41.740 "base_bdevs_list": [ 00:12:41.740 { 00:12:41.740 "name": null, 00:12:41.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.740 "is_configured": false, 00:12:41.740 "data_offset": 0, 00:12:41.740 "data_size": 65536 00:12:41.740 }, 00:12:41.740 { 00:12:41.740 "name": "BaseBdev2", 00:12:41.740 "uuid": "8737de91-2712-11ef-b084-113036b5c18d", 00:12:41.740 "is_configured": true, 00:12:41.740 "data_offset": 0, 00:12:41.740 "data_size": 65536 00:12:41.740 }, 00:12:41.740 { 00:12:41.740 "name": "BaseBdev3", 00:12:41.740 "uuid": "88321e9d-2712-11ef-b084-113036b5c18d", 00:12:41.740 "is_configured": true, 00:12:41.740 "data_offset": 0, 00:12:41.740 "data_size": 65536 00:12:41.740 }, 00:12:41.740 { 00:12:41.740 "name": "BaseBdev4", 00:12:41.740 "uuid": "89118486-2712-11ef-b084-113036b5c18d", 00:12:41.740 "is_configured": true, 00:12:41.740 "data_offset": 0, 00:12:41.740 "data_size": 65536 00:12:41.740 } 00:12:41.740 ] 00:12:41.740 }' 00:12:41.740 10:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.740 10:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.998 10:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:41.998 10:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:41.998 10:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.998 10:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:42.256 10:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:42.256 10:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:42.256 10:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:42.515 [2024-06-10 10:16:48.050689] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:42.515 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:42.515 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:42.515 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.515 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:43.081 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:43.081 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.081 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:43.081 [2024-06-10 10:16:48.631602] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:43.081 
10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:43.081 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:43.081 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.081 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:43.339 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:43.339 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.339 10:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:12:43.596 [2024-06-10 10:16:49.168495] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:43.596 [2024-06-10 10:16:49.168539] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d315a00 name Existed_Raid, state offline 00:12:43.596 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:43.596 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:43.596 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.596 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:44.162 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:44.162 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:44.162 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:12:44.162 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:44.162 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:44.162 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:44.420 BaseBdev2 00:12:44.420 10:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:44.420 10:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:12:44.420 10:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:44.420 10:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:44.420 10:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:44.420 10:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:44.420 10:16:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:44.679 10:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:44.937 [ 00:12:44.937 { 00:12:44.937 "name": 
"BaseBdev2", 00:12:44.937 "aliases": [ 00:12:44.937 "8ce171f9-2712-11ef-b084-113036b5c18d" 00:12:44.937 ], 00:12:44.937 "product_name": "Malloc disk", 00:12:44.937 "block_size": 512, 00:12:44.937 "num_blocks": 65536, 00:12:44.937 "uuid": "8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:44.937 "assigned_rate_limits": { 00:12:44.937 "rw_ios_per_sec": 0, 00:12:44.937 "rw_mbytes_per_sec": 0, 00:12:44.937 "r_mbytes_per_sec": 0, 00:12:44.937 "w_mbytes_per_sec": 0 00:12:44.937 }, 00:12:44.937 "claimed": false, 00:12:44.937 "zoned": false, 00:12:44.937 "supported_io_types": { 00:12:44.937 "read": true, 00:12:44.937 "write": true, 00:12:44.937 "unmap": true, 00:12:44.937 "write_zeroes": true, 00:12:44.937 "flush": true, 00:12:44.937 "reset": true, 00:12:44.937 "compare": false, 00:12:44.937 "compare_and_write": false, 00:12:44.937 "abort": true, 00:12:44.938 "nvme_admin": false, 00:12:44.938 "nvme_io": false 00:12:44.938 }, 00:12:44.938 "memory_domains": [ 00:12:44.938 { 00:12:44.938 "dma_device_id": "system", 00:12:44.938 "dma_device_type": 1 00:12:44.938 }, 00:12:44.938 { 00:12:44.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.938 "dma_device_type": 2 00:12:44.938 } 00:12:44.938 ], 00:12:44.938 "driver_specific": {} 00:12:44.938 } 00:12:44.938 ] 00:12:44.938 10:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:44.938 10:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:44.938 10:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:44.938 10:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:45.196 BaseBdev3 00:12:45.196 10:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:45.196 10:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:12:45.196 10:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:45.196 10:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:45.196 10:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:45.196 10:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:45.196 10:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:45.455 10:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:45.713 [ 00:12:45.713 { 00:12:45.713 "name": "BaseBdev3", 00:12:45.713 "aliases": [ 00:12:45.713 "8d668219-2712-11ef-b084-113036b5c18d" 00:12:45.713 ], 00:12:45.713 "product_name": "Malloc disk", 00:12:45.713 "block_size": 512, 00:12:45.713 "num_blocks": 65536, 00:12:45.713 "uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:45.713 "assigned_rate_limits": { 00:12:45.713 "rw_ios_per_sec": 0, 00:12:45.713 "rw_mbytes_per_sec": 0, 00:12:45.713 "r_mbytes_per_sec": 0, 00:12:45.713 "w_mbytes_per_sec": 0 00:12:45.713 }, 00:12:45.713 "claimed": false, 00:12:45.713 "zoned": false, 00:12:45.713 "supported_io_types": { 00:12:45.713 "read": true, 00:12:45.713 "write": true, 
00:12:45.713 "unmap": true, 00:12:45.713 "write_zeroes": true, 00:12:45.713 "flush": true, 00:12:45.713 "reset": true, 00:12:45.713 "compare": false, 00:12:45.713 "compare_and_write": false, 00:12:45.713 "abort": true, 00:12:45.713 "nvme_admin": false, 00:12:45.713 "nvme_io": false 00:12:45.713 }, 00:12:45.713 "memory_domains": [ 00:12:45.713 { 00:12:45.713 "dma_device_id": "system", 00:12:45.713 "dma_device_type": 1 00:12:45.713 }, 00:12:45.713 { 00:12:45.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.713 "dma_device_type": 2 00:12:45.713 } 00:12:45.713 ], 00:12:45.713 "driver_specific": {} 00:12:45.713 } 00:12:45.713 ] 00:12:45.713 10:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:45.713 10:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:45.713 10:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:45.713 10:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:45.971 BaseBdev4 00:12:46.229 10:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:12:46.229 10:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:12:46.229 10:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:46.229 10:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:46.229 10:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:46.230 10:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:46.230 10:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:46.487 10:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:46.747 [ 00:12:46.747 { 00:12:46.747 "name": "BaseBdev4", 00:12:46.747 "aliases": [ 00:12:46.747 "8dea58d0-2712-11ef-b084-113036b5c18d" 00:12:46.747 ], 00:12:46.747 "product_name": "Malloc disk", 00:12:46.747 "block_size": 512, 00:12:46.747 "num_blocks": 65536, 00:12:46.747 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:46.747 "assigned_rate_limits": { 00:12:46.747 "rw_ios_per_sec": 0, 00:12:46.747 "rw_mbytes_per_sec": 0, 00:12:46.747 "r_mbytes_per_sec": 0, 00:12:46.747 "w_mbytes_per_sec": 0 00:12:46.747 }, 00:12:46.747 "claimed": false, 00:12:46.747 "zoned": false, 00:12:46.747 "supported_io_types": { 00:12:46.747 "read": true, 00:12:46.747 "write": true, 00:12:46.747 "unmap": true, 00:12:46.747 "write_zeroes": true, 00:12:46.747 "flush": true, 00:12:46.747 "reset": true, 00:12:46.747 "compare": false, 00:12:46.747 "compare_and_write": false, 00:12:46.747 "abort": true, 00:12:46.747 "nvme_admin": false, 00:12:46.747 "nvme_io": false 00:12:46.747 }, 00:12:46.747 "memory_domains": [ 00:12:46.747 { 00:12:46.747 "dma_device_id": "system", 00:12:46.747 "dma_device_type": 1 00:12:46.747 }, 00:12:46.747 { 00:12:46.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.747 "dma_device_type": 2 00:12:46.747 } 00:12:46.747 ], 00:12:46.747 "driver_specific": {} 00:12:46.747 } 00:12:46.747 ] 
00:12:46.747 10:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:46.747 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:46.747 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:46.747 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:47.005 [2024-06-10 10:16:52.425525] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.005 [2024-06-10 10:16:52.425588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.005 [2024-06-10 10:16:52.425598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:47.005 [2024-06-10 10:16:52.426080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:47.005 [2024-06-10 10:16:52.426104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.005 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.264 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.264 "name": "Existed_Raid", 00:12:47.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.264 "strip_size_kb": 64, 00:12:47.264 "state": "configuring", 00:12:47.264 "raid_level": "raid0", 00:12:47.264 "superblock": false, 00:12:47.264 "num_base_bdevs": 4, 00:12:47.264 "num_base_bdevs_discovered": 3, 00:12:47.264 "num_base_bdevs_operational": 4, 00:12:47.264 "base_bdevs_list": [ 00:12:47.264 { 00:12:47.264 "name": "BaseBdev1", 00:12:47.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.264 "is_configured": false, 00:12:47.264 "data_offset": 0, 00:12:47.264 "data_size": 0 00:12:47.264 }, 00:12:47.264 { 00:12:47.264 "name": "BaseBdev2", 00:12:47.264 "uuid": "8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:47.264 "is_configured": true, 
00:12:47.264 "data_offset": 0, 00:12:47.264 "data_size": 65536 00:12:47.264 }, 00:12:47.264 { 00:12:47.264 "name": "BaseBdev3", 00:12:47.264 "uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:47.264 "is_configured": true, 00:12:47.264 "data_offset": 0, 00:12:47.264 "data_size": 65536 00:12:47.264 }, 00:12:47.264 { 00:12:47.264 "name": "BaseBdev4", 00:12:47.264 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:47.264 "is_configured": true, 00:12:47.264 "data_offset": 0, 00:12:47.264 "data_size": 65536 00:12:47.264 } 00:12:47.264 ] 00:12:47.264 }' 00:12:47.264 10:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.264 10:16:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:47.829 [2024-06-10 10:16:53.349547] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.829 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.087 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.087 "name": "Existed_Raid", 00:12:48.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.087 "strip_size_kb": 64, 00:12:48.087 "state": "configuring", 00:12:48.087 "raid_level": "raid0", 00:12:48.087 "superblock": false, 00:12:48.087 "num_base_bdevs": 4, 00:12:48.087 "num_base_bdevs_discovered": 2, 00:12:48.087 "num_base_bdevs_operational": 4, 00:12:48.087 "base_bdevs_list": [ 00:12:48.087 { 00:12:48.087 "name": "BaseBdev1", 00:12:48.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.087 "is_configured": false, 00:12:48.087 "data_offset": 0, 00:12:48.087 "data_size": 0 00:12:48.087 }, 00:12:48.087 { 00:12:48.087 "name": null, 00:12:48.087 "uuid": "8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:48.087 "is_configured": false, 00:12:48.087 "data_offset": 0, 00:12:48.087 "data_size": 65536 00:12:48.087 }, 00:12:48.087 { 00:12:48.087 "name": "BaseBdev3", 00:12:48.087 
"uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:48.087 "is_configured": true, 00:12:48.087 "data_offset": 0, 00:12:48.087 "data_size": 65536 00:12:48.087 }, 00:12:48.088 { 00:12:48.088 "name": "BaseBdev4", 00:12:48.088 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:48.088 "is_configured": true, 00:12:48.088 "data_offset": 0, 00:12:48.088 "data_size": 65536 00:12:48.088 } 00:12:48.088 ] 00:12:48.088 }' 00:12:48.088 10:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.088 10:16:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.653 10:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.653 10:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:48.910 10:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:48.910 10:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:49.168 [2024-06-10 10:16:54.553713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.168 BaseBdev1 00:12:49.168 10:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:49.168 10:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:12:49.168 10:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:49.168 10:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:49.168 10:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:49.168 10:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:49.168 10:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:49.426 10:16:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:49.684 [ 00:12:49.684 { 00:12:49.684 "name": "BaseBdev1", 00:12:49.684 "aliases": [ 00:12:49.684 "8fb2e82d-2712-11ef-b084-113036b5c18d" 00:12:49.684 ], 00:12:49.684 "product_name": "Malloc disk", 00:12:49.684 "block_size": 512, 00:12:49.684 "num_blocks": 65536, 00:12:49.684 "uuid": "8fb2e82d-2712-11ef-b084-113036b5c18d", 00:12:49.684 "assigned_rate_limits": { 00:12:49.684 "rw_ios_per_sec": 0, 00:12:49.684 "rw_mbytes_per_sec": 0, 00:12:49.684 "r_mbytes_per_sec": 0, 00:12:49.684 "w_mbytes_per_sec": 0 00:12:49.684 }, 00:12:49.684 "claimed": true, 00:12:49.684 "claim_type": "exclusive_write", 00:12:49.684 "zoned": false, 00:12:49.684 "supported_io_types": { 00:12:49.684 "read": true, 00:12:49.684 "write": true, 00:12:49.684 "unmap": true, 00:12:49.684 "write_zeroes": true, 00:12:49.684 "flush": true, 00:12:49.684 "reset": true, 00:12:49.684 "compare": false, 00:12:49.684 "compare_and_write": false, 00:12:49.684 "abort": true, 00:12:49.684 "nvme_admin": false, 00:12:49.684 "nvme_io": false 00:12:49.684 }, 00:12:49.684 "memory_domains": [ 00:12:49.684 { 00:12:49.684 
"dma_device_id": "system", 00:12:49.684 "dma_device_type": 1 00:12:49.684 }, 00:12:49.684 { 00:12:49.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.684 "dma_device_type": 2 00:12:49.684 } 00:12:49.684 ], 00:12:49.684 "driver_specific": {} 00:12:49.684 } 00:12:49.684 ] 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.684 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.943 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.943 "name": "Existed_Raid", 00:12:49.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.943 "strip_size_kb": 64, 00:12:49.943 "state": "configuring", 00:12:49.943 "raid_level": "raid0", 00:12:49.943 "superblock": false, 00:12:49.943 "num_base_bdevs": 4, 00:12:49.943 "num_base_bdevs_discovered": 3, 00:12:49.943 "num_base_bdevs_operational": 4, 00:12:49.943 "base_bdevs_list": [ 00:12:49.943 { 00:12:49.943 "name": "BaseBdev1", 00:12:49.943 "uuid": "8fb2e82d-2712-11ef-b084-113036b5c18d", 00:12:49.943 "is_configured": true, 00:12:49.943 "data_offset": 0, 00:12:49.943 "data_size": 65536 00:12:49.943 }, 00:12:49.943 { 00:12:49.943 "name": null, 00:12:49.943 "uuid": "8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:49.943 "is_configured": false, 00:12:49.943 "data_offset": 0, 00:12:49.943 "data_size": 65536 00:12:49.943 }, 00:12:49.943 { 00:12:49.943 "name": "BaseBdev3", 00:12:49.943 "uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:49.943 "is_configured": true, 00:12:49.943 "data_offset": 0, 00:12:49.943 "data_size": 65536 00:12:49.943 }, 00:12:49.943 { 00:12:49.943 "name": "BaseBdev4", 00:12:49.943 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:49.943 "is_configured": true, 00:12:49.943 "data_offset": 0, 00:12:49.943 "data_size": 65536 00:12:49.943 } 00:12:49.943 ] 00:12:49.943 }' 00:12:49.943 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.943 10:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.200 10:16:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.200 10:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.458 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:50.458 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:50.793 [2024-06-10 10:16:56.329670] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.793 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.360 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.360 "name": "Existed_Raid", 00:12:51.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.360 "strip_size_kb": 64, 00:12:51.360 "state": "configuring", 00:12:51.360 "raid_level": "raid0", 00:12:51.360 "superblock": false, 00:12:51.360 "num_base_bdevs": 4, 00:12:51.360 "num_base_bdevs_discovered": 2, 00:12:51.360 "num_base_bdevs_operational": 4, 00:12:51.360 "base_bdevs_list": [ 00:12:51.360 { 00:12:51.360 "name": "BaseBdev1", 00:12:51.360 "uuid": "8fb2e82d-2712-11ef-b084-113036b5c18d", 00:12:51.360 "is_configured": true, 00:12:51.360 "data_offset": 0, 00:12:51.360 "data_size": 65536 00:12:51.360 }, 00:12:51.360 { 00:12:51.360 "name": null, 00:12:51.360 "uuid": "8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:51.360 "is_configured": false, 00:12:51.360 "data_offset": 0, 00:12:51.360 "data_size": 65536 00:12:51.360 }, 00:12:51.360 { 00:12:51.360 "name": null, 00:12:51.360 "uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:51.360 "is_configured": false, 00:12:51.360 "data_offset": 0, 00:12:51.360 "data_size": 65536 00:12:51.360 }, 00:12:51.360 { 00:12:51.360 "name": "BaseBdev4", 00:12:51.360 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:51.360 "is_configured": true, 00:12:51.360 "data_offset": 0, 00:12:51.360 "data_size": 65536 00:12:51.360 } 00:12:51.360 ] 
00:12:51.360 }' 00:12:51.360 10:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.360 10:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.617 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.617 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:51.874 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:51.874 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:52.142 [2024-06-10 10:16:57.557710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.142 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.400 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.400 "name": "Existed_Raid", 00:12:52.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.400 "strip_size_kb": 64, 00:12:52.400 "state": "configuring", 00:12:52.400 "raid_level": "raid0", 00:12:52.400 "superblock": false, 00:12:52.400 "num_base_bdevs": 4, 00:12:52.400 "num_base_bdevs_discovered": 3, 00:12:52.400 "num_base_bdevs_operational": 4, 00:12:52.400 "base_bdevs_list": [ 00:12:52.400 { 00:12:52.400 "name": "BaseBdev1", 00:12:52.400 "uuid": "8fb2e82d-2712-11ef-b084-113036b5c18d", 00:12:52.400 "is_configured": true, 00:12:52.400 "data_offset": 0, 00:12:52.400 "data_size": 65536 00:12:52.400 }, 00:12:52.400 { 00:12:52.400 "name": null, 00:12:52.400 "uuid": "8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:52.400 "is_configured": false, 00:12:52.400 "data_offset": 0, 00:12:52.400 "data_size": 65536 00:12:52.400 }, 00:12:52.400 { 00:12:52.400 "name": "BaseBdev3", 00:12:52.400 "uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:52.400 "is_configured": true, 
00:12:52.400 "data_offset": 0, 00:12:52.400 "data_size": 65536 00:12:52.400 }, 00:12:52.400 { 00:12:52.400 "name": "BaseBdev4", 00:12:52.400 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:52.400 "is_configured": true, 00:12:52.400 "data_offset": 0, 00:12:52.400 "data_size": 65536 00:12:52.400 } 00:12:52.400 ] 00:12:52.400 }' 00:12:52.400 10:16:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.400 10:16:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.966 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.966 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:52.966 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:53.225 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:53.225 [2024-06-10 10:16:58.813774] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:53.483 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:53.483 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:53.483 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:53.483 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:53.483 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:53.483 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:53.483 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:53.483 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:53.483 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:53.483 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:53.484 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.484 10:16:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.742 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:53.742 "name": "Existed_Raid", 00:12:53.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.742 "strip_size_kb": 64, 00:12:53.742 "state": "configuring", 00:12:53.742 "raid_level": "raid0", 00:12:53.742 "superblock": false, 00:12:53.742 "num_base_bdevs": 4, 00:12:53.742 "num_base_bdevs_discovered": 2, 00:12:53.742 "num_base_bdevs_operational": 4, 00:12:53.742 "base_bdevs_list": [ 00:12:53.742 { 00:12:53.742 "name": null, 00:12:53.742 "uuid": "8fb2e82d-2712-11ef-b084-113036b5c18d", 00:12:53.742 "is_configured": false, 00:12:53.742 "data_offset": 0, 00:12:53.742 "data_size": 65536 00:12:53.742 }, 00:12:53.742 { 00:12:53.742 "name": null, 00:12:53.742 "uuid": 
"8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:53.742 "is_configured": false, 00:12:53.742 "data_offset": 0, 00:12:53.742 "data_size": 65536 00:12:53.742 }, 00:12:53.742 { 00:12:53.742 "name": "BaseBdev3", 00:12:53.742 "uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:53.742 "is_configured": true, 00:12:53.742 "data_offset": 0, 00:12:53.742 "data_size": 65536 00:12:53.742 }, 00:12:53.742 { 00:12:53.742 "name": "BaseBdev4", 00:12:53.742 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:53.742 "is_configured": true, 00:12:53.742 "data_offset": 0, 00:12:53.742 "data_size": 65536 00:12:53.742 } 00:12:53.742 ] 00:12:53.742 }' 00:12:53.742 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:53.742 10:16:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.999 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.999 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:54.258 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:54.258 10:16:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:54.548 [2024-06-10 10:17:00.006710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.548 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:54.548 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:54.548 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:54.548 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:54.548 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:54.548 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:54.548 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.548 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.548 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.548 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.549 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.549 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.807 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.807 "name": "Existed_Raid", 00:12:54.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.807 "strip_size_kb": 64, 00:12:54.807 "state": "configuring", 00:12:54.807 "raid_level": "raid0", 00:12:54.807 "superblock": false, 00:12:54.807 "num_base_bdevs": 4, 00:12:54.807 "num_base_bdevs_discovered": 3, 00:12:54.807 "num_base_bdevs_operational": 4, 
00:12:54.807 "base_bdevs_list": [ 00:12:54.807 { 00:12:54.807 "name": null, 00:12:54.807 "uuid": "8fb2e82d-2712-11ef-b084-113036b5c18d", 00:12:54.807 "is_configured": false, 00:12:54.807 "data_offset": 0, 00:12:54.807 "data_size": 65536 00:12:54.807 }, 00:12:54.807 { 00:12:54.807 "name": "BaseBdev2", 00:12:54.807 "uuid": "8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:54.807 "is_configured": true, 00:12:54.807 "data_offset": 0, 00:12:54.807 "data_size": 65536 00:12:54.807 }, 00:12:54.807 { 00:12:54.807 "name": "BaseBdev3", 00:12:54.807 "uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:54.807 "is_configured": true, 00:12:54.807 "data_offset": 0, 00:12:54.807 "data_size": 65536 00:12:54.807 }, 00:12:54.807 { 00:12:54.807 "name": "BaseBdev4", 00:12:54.807 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:54.807 "is_configured": true, 00:12:54.807 "data_offset": 0, 00:12:54.807 "data_size": 65536 00:12:54.807 } 00:12:54.807 ] 00:12:54.807 }' 00:12:54.807 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.807 10:17:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.064 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.064 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:55.631 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:55.631 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.631 10:17:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:55.889 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8fb2e82d-2712-11ef-b084-113036b5c18d 00:12:56.146 [2024-06-10 10:17:01.630895] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:56.146 [2024-06-10 10:17:01.630924] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d315f00 00:12:56.146 [2024-06-10 10:17:01.630928] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:56.146 [2024-06-10 10:17:01.630951] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d378e20 00:12:56.146 [2024-06-10 10:17:01.631011] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d315f00 00:12:56.146 [2024-06-10 10:17:01.631015] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d315f00 00:12:56.146 [2024-06-10 10:17:01.631047] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.146 NewBaseBdev 00:12:56.146 10:17:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:56.146 10:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:12:56.146 10:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:12:56.146 10:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:12:56.146 10:17:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:12:56.146 10:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:12:56.146 10:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.403 10:17:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:56.679 [ 00:12:56.679 { 00:12:56.679 "name": "NewBaseBdev", 00:12:56.679 "aliases": [ 00:12:56.679 "8fb2e82d-2712-11ef-b084-113036b5c18d" 00:12:56.679 ], 00:12:56.679 "product_name": "Malloc disk", 00:12:56.679 "block_size": 512, 00:12:56.679 "num_blocks": 65536, 00:12:56.679 "uuid": "8fb2e82d-2712-11ef-b084-113036b5c18d", 00:12:56.679 "assigned_rate_limits": { 00:12:56.679 "rw_ios_per_sec": 0, 00:12:56.679 "rw_mbytes_per_sec": 0, 00:12:56.679 "r_mbytes_per_sec": 0, 00:12:56.679 "w_mbytes_per_sec": 0 00:12:56.679 }, 00:12:56.679 "claimed": true, 00:12:56.679 "claim_type": "exclusive_write", 00:12:56.679 "zoned": false, 00:12:56.679 "supported_io_types": { 00:12:56.679 "read": true, 00:12:56.679 "write": true, 00:12:56.679 "unmap": true, 00:12:56.679 "write_zeroes": true, 00:12:56.679 "flush": true, 00:12:56.679 "reset": true, 00:12:56.679 "compare": false, 00:12:56.679 "compare_and_write": false, 00:12:56.679 "abort": true, 00:12:56.679 "nvme_admin": false, 00:12:56.679 "nvme_io": false 00:12:56.679 }, 00:12:56.679 "memory_domains": [ 00:12:56.679 { 00:12:56.679 "dma_device_id": "system", 00:12:56.679 "dma_device_type": 1 00:12:56.679 }, 00:12:56.679 { 00:12:56.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.679 "dma_device_type": 2 00:12:56.679 } 00:12:56.679 ], 00:12:56.679 "driver_specific": {} 00:12:56.679 } 00:12:56.679 ] 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.679 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.947 10:17:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.947 "name": "Existed_Raid", 00:12:56.947 "uuid": "93ead268-2712-11ef-b084-113036b5c18d", 00:12:56.947 "strip_size_kb": 64, 00:12:56.947 "state": "online", 00:12:56.947 "raid_level": "raid0", 00:12:56.947 "superblock": false, 00:12:56.947 "num_base_bdevs": 4, 00:12:56.947 "num_base_bdevs_discovered": 4, 00:12:56.947 "num_base_bdevs_operational": 4, 00:12:56.947 "base_bdevs_list": [ 00:12:56.947 { 00:12:56.947 "name": "NewBaseBdev", 00:12:56.947 "uuid": "8fb2e82d-2712-11ef-b084-113036b5c18d", 00:12:56.947 "is_configured": true, 00:12:56.947 "data_offset": 0, 00:12:56.947 "data_size": 65536 00:12:56.947 }, 00:12:56.947 { 00:12:56.947 "name": "BaseBdev2", 00:12:56.947 "uuid": "8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:56.947 "is_configured": true, 00:12:56.947 "data_offset": 0, 00:12:56.947 "data_size": 65536 00:12:56.947 }, 00:12:56.947 { 00:12:56.947 "name": "BaseBdev3", 00:12:56.947 "uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:56.947 "is_configured": true, 00:12:56.947 "data_offset": 0, 00:12:56.947 "data_size": 65536 00:12:56.947 }, 00:12:56.947 { 00:12:56.947 "name": "BaseBdev4", 00:12:56.947 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:56.947 "is_configured": true, 00:12:56.947 "data_offset": 0, 00:12:56.947 "data_size": 65536 00:12:56.947 } 00:12:56.947 ] 00:12:56.947 }' 00:12:56.947 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.947 10:17:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.205 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.205 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:57.205 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:57.205 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:57.205 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:57.205 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:57.205 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:57.205 10:17:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:57.462 [2024-06-10 10:17:02.994861] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.462 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:57.462 "name": "Existed_Raid", 00:12:57.462 "aliases": [ 00:12:57.462 "93ead268-2712-11ef-b084-113036b5c18d" 00:12:57.462 ], 00:12:57.462 "product_name": "Raid Volume", 00:12:57.462 "block_size": 512, 00:12:57.462 "num_blocks": 262144, 00:12:57.462 "uuid": "93ead268-2712-11ef-b084-113036b5c18d", 00:12:57.462 "assigned_rate_limits": { 00:12:57.462 "rw_ios_per_sec": 0, 00:12:57.462 "rw_mbytes_per_sec": 0, 00:12:57.462 "r_mbytes_per_sec": 0, 00:12:57.462 "w_mbytes_per_sec": 0 00:12:57.462 }, 00:12:57.462 "claimed": false, 00:12:57.462 "zoned": false, 00:12:57.462 "supported_io_types": { 00:12:57.462 "read": true, 00:12:57.462 "write": true, 00:12:57.462 "unmap": true, 00:12:57.462 "write_zeroes": true, 00:12:57.462 "flush": true, 00:12:57.462 
"reset": true, 00:12:57.462 "compare": false, 00:12:57.462 "compare_and_write": false, 00:12:57.462 "abort": false, 00:12:57.462 "nvme_admin": false, 00:12:57.462 "nvme_io": false 00:12:57.462 }, 00:12:57.462 "memory_domains": [ 00:12:57.462 { 00:12:57.462 "dma_device_id": "system", 00:12:57.462 "dma_device_type": 1 00:12:57.462 }, 00:12:57.462 { 00:12:57.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.462 "dma_device_type": 2 00:12:57.462 }, 00:12:57.462 { 00:12:57.462 "dma_device_id": "system", 00:12:57.462 "dma_device_type": 1 00:12:57.462 }, 00:12:57.462 { 00:12:57.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.462 "dma_device_type": 2 00:12:57.462 }, 00:12:57.462 { 00:12:57.462 "dma_device_id": "system", 00:12:57.462 "dma_device_type": 1 00:12:57.462 }, 00:12:57.462 { 00:12:57.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.462 "dma_device_type": 2 00:12:57.462 }, 00:12:57.462 { 00:12:57.462 "dma_device_id": "system", 00:12:57.462 "dma_device_type": 1 00:12:57.462 }, 00:12:57.462 { 00:12:57.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.462 "dma_device_type": 2 00:12:57.462 } 00:12:57.462 ], 00:12:57.462 "driver_specific": { 00:12:57.462 "raid": { 00:12:57.462 "uuid": "93ead268-2712-11ef-b084-113036b5c18d", 00:12:57.462 "strip_size_kb": 64, 00:12:57.462 "state": "online", 00:12:57.462 "raid_level": "raid0", 00:12:57.462 "superblock": false, 00:12:57.462 "num_base_bdevs": 4, 00:12:57.462 "num_base_bdevs_discovered": 4, 00:12:57.462 "num_base_bdevs_operational": 4, 00:12:57.462 "base_bdevs_list": [ 00:12:57.462 { 00:12:57.462 "name": "NewBaseBdev", 00:12:57.462 "uuid": "8fb2e82d-2712-11ef-b084-113036b5c18d", 00:12:57.462 "is_configured": true, 00:12:57.462 "data_offset": 0, 00:12:57.462 "data_size": 65536 00:12:57.462 }, 00:12:57.462 { 00:12:57.462 "name": "BaseBdev2", 00:12:57.462 "uuid": "8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:57.462 "is_configured": true, 00:12:57.462 "data_offset": 0, 00:12:57.462 "data_size": 65536 00:12:57.462 }, 00:12:57.462 { 00:12:57.462 "name": "BaseBdev3", 00:12:57.462 "uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:57.462 "is_configured": true, 00:12:57.462 "data_offset": 0, 00:12:57.462 "data_size": 65536 00:12:57.462 }, 00:12:57.462 { 00:12:57.462 "name": "BaseBdev4", 00:12:57.462 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:57.462 "is_configured": true, 00:12:57.462 "data_offset": 0, 00:12:57.462 "data_size": 65536 00:12:57.462 } 00:12:57.462 ] 00:12:57.462 } 00:12:57.462 } 00:12:57.462 }' 00:12:57.462 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.462 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:57.462 BaseBdev2 00:12:57.462 BaseBdev3 00:12:57.462 BaseBdev4' 00:12:57.463 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:57.463 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:57.463 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.029 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.029 "name": "NewBaseBdev", 00:12:58.029 "aliases": [ 00:12:58.029 "8fb2e82d-2712-11ef-b084-113036b5c18d" 00:12:58.029 ], 00:12:58.029 "product_name": "Malloc 
disk", 00:12:58.029 "block_size": 512, 00:12:58.029 "num_blocks": 65536, 00:12:58.029 "uuid": "8fb2e82d-2712-11ef-b084-113036b5c18d", 00:12:58.029 "assigned_rate_limits": { 00:12:58.029 "rw_ios_per_sec": 0, 00:12:58.029 "rw_mbytes_per_sec": 0, 00:12:58.029 "r_mbytes_per_sec": 0, 00:12:58.029 "w_mbytes_per_sec": 0 00:12:58.029 }, 00:12:58.029 "claimed": true, 00:12:58.029 "claim_type": "exclusive_write", 00:12:58.029 "zoned": false, 00:12:58.029 "supported_io_types": { 00:12:58.029 "read": true, 00:12:58.029 "write": true, 00:12:58.029 "unmap": true, 00:12:58.029 "write_zeroes": true, 00:12:58.029 "flush": true, 00:12:58.029 "reset": true, 00:12:58.029 "compare": false, 00:12:58.029 "compare_and_write": false, 00:12:58.029 "abort": true, 00:12:58.029 "nvme_admin": false, 00:12:58.029 "nvme_io": false 00:12:58.029 }, 00:12:58.029 "memory_domains": [ 00:12:58.029 { 00:12:58.029 "dma_device_id": "system", 00:12:58.029 "dma_device_type": 1 00:12:58.029 }, 00:12:58.029 { 00:12:58.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.030 "dma_device_type": 2 00:12:58.030 } 00:12:58.030 ], 00:12:58.030 "driver_specific": {} 00:12:58.030 }' 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.030 "name": "BaseBdev2", 00:12:58.030 "aliases": [ 00:12:58.030 "8ce171f9-2712-11ef-b084-113036b5c18d" 00:12:58.030 ], 00:12:58.030 "product_name": "Malloc disk", 00:12:58.030 "block_size": 512, 00:12:58.030 "num_blocks": 65536, 00:12:58.030 "uuid": "8ce171f9-2712-11ef-b084-113036b5c18d", 00:12:58.030 "assigned_rate_limits": { 00:12:58.030 "rw_ios_per_sec": 0, 00:12:58.030 "rw_mbytes_per_sec": 0, 00:12:58.030 "r_mbytes_per_sec": 0, 00:12:58.030 "w_mbytes_per_sec": 0 00:12:58.030 }, 00:12:58.030 "claimed": true, 00:12:58.030 "claim_type": "exclusive_write", 00:12:58.030 "zoned": false, 00:12:58.030 "supported_io_types": { 00:12:58.030 "read": 
true, 00:12:58.030 "write": true, 00:12:58.030 "unmap": true, 00:12:58.030 "write_zeroes": true, 00:12:58.030 "flush": true, 00:12:58.030 "reset": true, 00:12:58.030 "compare": false, 00:12:58.030 "compare_and_write": false, 00:12:58.030 "abort": true, 00:12:58.030 "nvme_admin": false, 00:12:58.030 "nvme_io": false 00:12:58.030 }, 00:12:58.030 "memory_domains": [ 00:12:58.030 { 00:12:58.030 "dma_device_id": "system", 00:12:58.030 "dma_device_type": 1 00:12:58.030 }, 00:12:58.030 { 00:12:58.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.030 "dma_device_type": 2 00:12:58.030 } 00:12:58.030 ], 00:12:58.030 "driver_specific": {} 00:12:58.030 }' 00:12:58.030 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.288 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:58.289 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.549 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.549 "name": "BaseBdev3", 00:12:58.549 "aliases": [ 00:12:58.549 "8d668219-2712-11ef-b084-113036b5c18d" 00:12:58.549 ], 00:12:58.549 "product_name": "Malloc disk", 00:12:58.549 "block_size": 512, 00:12:58.549 "num_blocks": 65536, 00:12:58.549 "uuid": "8d668219-2712-11ef-b084-113036b5c18d", 00:12:58.549 "assigned_rate_limits": { 00:12:58.549 "rw_ios_per_sec": 0, 00:12:58.549 "rw_mbytes_per_sec": 0, 00:12:58.549 "r_mbytes_per_sec": 0, 00:12:58.549 "w_mbytes_per_sec": 0 00:12:58.549 }, 00:12:58.549 "claimed": true, 00:12:58.549 "claim_type": "exclusive_write", 00:12:58.549 "zoned": false, 00:12:58.549 "supported_io_types": { 00:12:58.549 "read": true, 00:12:58.549 "write": true, 00:12:58.549 "unmap": true, 00:12:58.549 "write_zeroes": true, 00:12:58.549 "flush": true, 00:12:58.549 "reset": true, 00:12:58.549 "compare": false, 00:12:58.549 "compare_and_write": false, 00:12:58.549 "abort": true, 00:12:58.549 "nvme_admin": false, 00:12:58.549 "nvme_io": false 00:12:58.549 }, 00:12:58.549 "memory_domains": [ 00:12:58.549 { 00:12:58.549 "dma_device_id": "system", 00:12:58.549 "dma_device_type": 1 00:12:58.549 }, 00:12:58.549 { 
00:12:58.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.549 "dma_device_type": 2 00:12:58.549 } 00:12:58.549 ], 00:12:58.549 "driver_specific": {} 00:12:58.549 }' 00:12:58.549 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.549 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.549 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.549 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.549 10:17:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.549 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.549 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.549 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.549 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.549 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.549 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.549 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.549 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.549 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:12:58.549 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.808 "name": "BaseBdev4", 00:12:58.808 "aliases": [ 00:12:58.808 "8dea58d0-2712-11ef-b084-113036b5c18d" 00:12:58.808 ], 00:12:58.808 "product_name": "Malloc disk", 00:12:58.808 "block_size": 512, 00:12:58.808 "num_blocks": 65536, 00:12:58.808 "uuid": "8dea58d0-2712-11ef-b084-113036b5c18d", 00:12:58.808 "assigned_rate_limits": { 00:12:58.808 "rw_ios_per_sec": 0, 00:12:58.808 "rw_mbytes_per_sec": 0, 00:12:58.808 "r_mbytes_per_sec": 0, 00:12:58.808 "w_mbytes_per_sec": 0 00:12:58.808 }, 00:12:58.808 "claimed": true, 00:12:58.808 "claim_type": "exclusive_write", 00:12:58.808 "zoned": false, 00:12:58.808 "supported_io_types": { 00:12:58.808 "read": true, 00:12:58.808 "write": true, 00:12:58.808 "unmap": true, 00:12:58.808 "write_zeroes": true, 00:12:58.808 "flush": true, 00:12:58.808 "reset": true, 00:12:58.808 "compare": false, 00:12:58.808 "compare_and_write": false, 00:12:58.808 "abort": true, 00:12:58.808 "nvme_admin": false, 00:12:58.808 "nvme_io": false 00:12:58.808 }, 00:12:58.808 "memory_domains": [ 00:12:58.808 { 00:12:58.808 "dma_device_id": "system", 00:12:58.808 "dma_device_type": 1 00:12:58.808 }, 00:12:58.808 { 00:12:58.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.808 "dma_device_type": 2 00:12:58.808 } 00:12:58.808 ], 00:12:58.808 "driver_specific": {} 00:12:58.808 }' 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.808 
10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.808 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:59.066 [2024-06-10 10:17:04.614894] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.066 [2024-06-10 10:17:04.614926] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.067 [2024-06-10 10:17:04.614947] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.067 [2024-06-10 10:17:04.614962] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.067 [2024-06-10 10:17:04.614966] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d315f00 name Existed_Raid, state offline 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 59149 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 59149 ']' 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 59149 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # tail -1 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps -c -o command 59149 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:12:59.067 killing process with pid 59149 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 59149' 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 59149 00:12:59.067 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 59149 00:12:59.067 [2024-06-10 10:17:04.646680] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.067 [2024-06-10 10:17:04.666085] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:59.324 ************************************ 00:12:59.324 END TEST 
raid_state_function_test 00:12:59.324 ************************************ 00:12:59.324 00:12:59.324 real 0m30.217s 00:12:59.324 user 0m55.859s 00:12:59.324 sys 0m3.731s 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.324 10:17:04 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:12:59.324 10:17:04 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:12:59.324 10:17:04 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:59.324 10:17:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.324 ************************************ 00:12:59.324 START TEST raid_state_function_test_sb 00:12:59.324 ************************************ 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid0 4 true 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=59980 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 59980' 00:12:59.324 Process raid pid: 59980 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 59980 /var/tmp/spdk-raid.sock 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 59980 ']' 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:59.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:59.324 10:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.324 [2024-06-10 10:17:04.890128] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:12:59.324 [2024-06-10 10:17:04.890375] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:59.888 EAL: TSC is not safe to use in SMP mode 00:12:59.888 EAL: TSC is not invariant 00:12:59.888 [2024-06-10 10:17:05.445548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.147 [2024-06-10 10:17:05.551689] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
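For reference, the flow that the raid_state_function_test traces above and the raid_state_function_test_sb trace below exercise can be condensed into a handful of rpc.py calls against the bdev_svc app once it is listening on /var/tmp/spdk-raid.sock. This is an illustrative sketch only, built from RPCs, paths, names and sizes that appear verbatim in this log (bdev_malloc_create, bdev_raid_create, bdev_raid_get_bdevs, bdev_get_bdevs, bdev_raid_delete); the test script itself interleaves these steps with many additional state checks.

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # create four 32 MB malloc base bdevs with 512-byte blocks (matches the num_blocks=65536 dumps in this log)
    for i in 1 2 3 4; do "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$i"; done
    # assemble them into a raid0 volume with a 64 KB strip size and an on-disk superblock (-s)
    "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # check the raid state and a base bdev property the same way the test's jq checks do
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
    "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 | jq '.[].block_size'
    # tear down
    "$rpc" -s "$sock" bdev_raid_delete Existed_Raid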
00:13:00.147 [2024-06-10 10:17:05.554357] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.147 [2024-06-10 10:17:05.555395] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.147 [2024-06-10 10:17:05.555423] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.713 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:00.713 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:13:00.713 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:00.971 [2024-06-10 10:17:06.397749] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:00.971 [2024-06-10 10:17:06.397810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:00.971 [2024-06-10 10:17:06.397815] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:00.971 [2024-06-10 10:17:06.397823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:00.971 [2024-06-10 10:17:06.397827] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:00.971 [2024-06-10 10:17:06.397834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:00.971 [2024-06-10 10:17:06.397837] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:00.971 [2024-06-10 10:17:06.397844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.971 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.230 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:01.230 "name": "Existed_Raid", 00:13:01.230 "uuid": 
"96c22dcc-2712-11ef-b084-113036b5c18d", 00:13:01.230 "strip_size_kb": 64, 00:13:01.230 "state": "configuring", 00:13:01.230 "raid_level": "raid0", 00:13:01.230 "superblock": true, 00:13:01.230 "num_base_bdevs": 4, 00:13:01.230 "num_base_bdevs_discovered": 0, 00:13:01.230 "num_base_bdevs_operational": 4, 00:13:01.230 "base_bdevs_list": [ 00:13:01.230 { 00:13:01.230 "name": "BaseBdev1", 00:13:01.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.230 "is_configured": false, 00:13:01.230 "data_offset": 0, 00:13:01.230 "data_size": 0 00:13:01.230 }, 00:13:01.230 { 00:13:01.230 "name": "BaseBdev2", 00:13:01.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.230 "is_configured": false, 00:13:01.230 "data_offset": 0, 00:13:01.230 "data_size": 0 00:13:01.230 }, 00:13:01.230 { 00:13:01.230 "name": "BaseBdev3", 00:13:01.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.230 "is_configured": false, 00:13:01.230 "data_offset": 0, 00:13:01.230 "data_size": 0 00:13:01.230 }, 00:13:01.230 { 00:13:01.230 "name": "BaseBdev4", 00:13:01.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.230 "is_configured": false, 00:13:01.230 "data_offset": 0, 00:13:01.230 "data_size": 0 00:13:01.230 } 00:13:01.230 ] 00:13:01.230 }' 00:13:01.230 10:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:01.230 10:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.488 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:01.745 [2024-06-10 10:17:07.285748] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.745 [2024-06-10 10:17:07.285785] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c9f9500 name Existed_Raid, state configuring 00:13:01.745 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:02.007 [2024-06-10 10:17:07.513772] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:02.007 [2024-06-10 10:17:07.513826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:02.007 [2024-06-10 10:17:07.513831] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:02.007 [2024-06-10 10:17:07.513840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:02.007 [2024-06-10 10:17:07.513844] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:02.007 [2024-06-10 10:17:07.513851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:02.007 [2024-06-10 10:17:07.513854] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:02.007 [2024-06-10 10:17:07.513861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:02.007 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:02.294 [2024-06-10 10:17:07.742951] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:13:02.294 BaseBdev1 00:13:02.294 10:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:02.294 10:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:13:02.294 10:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:02.295 10:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:02.295 10:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:02.295 10:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:02.295 10:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:02.552 10:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:02.810 [ 00:13:02.810 { 00:13:02.810 "name": "BaseBdev1", 00:13:02.810 "aliases": [ 00:13:02.810 "978f4417-2712-11ef-b084-113036b5c18d" 00:13:02.810 ], 00:13:02.810 "product_name": "Malloc disk", 00:13:02.810 "block_size": 512, 00:13:02.810 "num_blocks": 65536, 00:13:02.810 "uuid": "978f4417-2712-11ef-b084-113036b5c18d", 00:13:02.810 "assigned_rate_limits": { 00:13:02.810 "rw_ios_per_sec": 0, 00:13:02.810 "rw_mbytes_per_sec": 0, 00:13:02.810 "r_mbytes_per_sec": 0, 00:13:02.810 "w_mbytes_per_sec": 0 00:13:02.810 }, 00:13:02.810 "claimed": true, 00:13:02.810 "claim_type": "exclusive_write", 00:13:02.810 "zoned": false, 00:13:02.810 "supported_io_types": { 00:13:02.810 "read": true, 00:13:02.810 "write": true, 00:13:02.810 "unmap": true, 00:13:02.810 "write_zeroes": true, 00:13:02.810 "flush": true, 00:13:02.810 "reset": true, 00:13:02.810 "compare": false, 00:13:02.810 "compare_and_write": false, 00:13:02.810 "abort": true, 00:13:02.810 "nvme_admin": false, 00:13:02.810 "nvme_io": false 00:13:02.810 }, 00:13:02.810 "memory_domains": [ 00:13:02.810 { 00:13:02.810 "dma_device_id": "system", 00:13:02.810 "dma_device_type": 1 00:13:02.810 }, 00:13:02.810 { 00:13:02.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.810 "dma_device_type": 2 00:13:02.810 } 00:13:02.810 ], 00:13:02.810 "driver_specific": {} 00:13:02.810 } 00:13:02.810 ] 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.810 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.069 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.069 "name": "Existed_Raid", 00:13:03.069 "uuid": "976c787d-2712-11ef-b084-113036b5c18d", 00:13:03.069 "strip_size_kb": 64, 00:13:03.069 "state": "configuring", 00:13:03.069 "raid_level": "raid0", 00:13:03.069 "superblock": true, 00:13:03.069 "num_base_bdevs": 4, 00:13:03.069 "num_base_bdevs_discovered": 1, 00:13:03.069 "num_base_bdevs_operational": 4, 00:13:03.069 "base_bdevs_list": [ 00:13:03.069 { 00:13:03.069 "name": "BaseBdev1", 00:13:03.069 "uuid": "978f4417-2712-11ef-b084-113036b5c18d", 00:13:03.069 "is_configured": true, 00:13:03.069 "data_offset": 2048, 00:13:03.069 "data_size": 63488 00:13:03.069 }, 00:13:03.069 { 00:13:03.069 "name": "BaseBdev2", 00:13:03.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.069 "is_configured": false, 00:13:03.069 "data_offset": 0, 00:13:03.069 "data_size": 0 00:13:03.069 }, 00:13:03.069 { 00:13:03.069 "name": "BaseBdev3", 00:13:03.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.069 "is_configured": false, 00:13:03.069 "data_offset": 0, 00:13:03.069 "data_size": 0 00:13:03.069 }, 00:13:03.069 { 00:13:03.069 "name": "BaseBdev4", 00:13:03.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.069 "is_configured": false, 00:13:03.069 "data_offset": 0, 00:13:03.069 "data_size": 0 00:13:03.069 } 00:13:03.069 ] 00:13:03.069 }' 00:13:03.069 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.069 10:17:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.329 10:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:03.587 [2024-06-10 10:17:09.101832] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:03.587 [2024-06-10 10:17:09.101877] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c9f9500 name Existed_Raid, state configuring 00:13:03.587 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:03.845 [2024-06-10 10:17:09.333846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.845 [2024-06-10 10:17:09.334570] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:03.845 [2024-06-10 10:17:09.334626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:03.845 [2024-06-10 10:17:09.334631] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:03.845 [2024-06-10 10:17:09.334640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:13:03.845 [2024-06-10 10:17:09.334643] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:03.845 [2024-06-10 10:17:09.334651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:03.845 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.846 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.104 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.104 "name": "Existed_Raid", 00:13:04.104 "uuid": "988230e5-2712-11ef-b084-113036b5c18d", 00:13:04.104 "strip_size_kb": 64, 00:13:04.104 "state": "configuring", 00:13:04.104 "raid_level": "raid0", 00:13:04.104 "superblock": true, 00:13:04.104 "num_base_bdevs": 4, 00:13:04.104 "num_base_bdevs_discovered": 1, 00:13:04.104 "num_base_bdevs_operational": 4, 00:13:04.104 "base_bdevs_list": [ 00:13:04.104 { 00:13:04.104 "name": "BaseBdev1", 00:13:04.104 "uuid": "978f4417-2712-11ef-b084-113036b5c18d", 00:13:04.104 "is_configured": true, 00:13:04.104 "data_offset": 2048, 00:13:04.104 "data_size": 63488 00:13:04.104 }, 00:13:04.104 { 00:13:04.104 "name": "BaseBdev2", 00:13:04.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.104 "is_configured": false, 00:13:04.104 "data_offset": 0, 00:13:04.104 "data_size": 0 00:13:04.104 }, 00:13:04.104 { 00:13:04.104 "name": "BaseBdev3", 00:13:04.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.104 "is_configured": false, 00:13:04.104 "data_offset": 0, 00:13:04.104 "data_size": 0 00:13:04.104 }, 00:13:04.104 { 00:13:04.104 "name": "BaseBdev4", 00:13:04.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.104 "is_configured": false, 00:13:04.104 "data_offset": 0, 00:13:04.104 "data_size": 0 00:13:04.104 } 00:13:04.104 ] 00:13:04.104 }' 00:13:04.104 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:13:04.104 10:17:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.670 10:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:04.670 [2024-06-10 10:17:10.245996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.670 BaseBdev2 00:13:04.670 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:04.670 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:13:04.670 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:04.670 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:04.670 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:04.670 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:04.670 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:05.235 [ 00:13:05.235 { 00:13:05.235 "name": "BaseBdev2", 00:13:05.235 "aliases": [ 00:13:05.235 "990d5b78-2712-11ef-b084-113036b5c18d" 00:13:05.235 ], 00:13:05.235 "product_name": "Malloc disk", 00:13:05.235 "block_size": 512, 00:13:05.235 "num_blocks": 65536, 00:13:05.235 "uuid": "990d5b78-2712-11ef-b084-113036b5c18d", 00:13:05.235 "assigned_rate_limits": { 00:13:05.235 "rw_ios_per_sec": 0, 00:13:05.235 "rw_mbytes_per_sec": 0, 00:13:05.235 "r_mbytes_per_sec": 0, 00:13:05.235 "w_mbytes_per_sec": 0 00:13:05.235 }, 00:13:05.235 "claimed": true, 00:13:05.235 "claim_type": "exclusive_write", 00:13:05.235 "zoned": false, 00:13:05.235 "supported_io_types": { 00:13:05.235 "read": true, 00:13:05.235 "write": true, 00:13:05.235 "unmap": true, 00:13:05.235 "write_zeroes": true, 00:13:05.235 "flush": true, 00:13:05.235 "reset": true, 00:13:05.235 "compare": false, 00:13:05.235 "compare_and_write": false, 00:13:05.235 "abort": true, 00:13:05.235 "nvme_admin": false, 00:13:05.235 "nvme_io": false 00:13:05.235 }, 00:13:05.235 "memory_domains": [ 00:13:05.235 { 00:13:05.235 "dma_device_id": "system", 00:13:05.235 "dma_device_type": 1 00:13:05.235 }, 00:13:05.235 { 00:13:05.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.235 "dma_device_type": 2 00:13:05.235 } 00:13:05.235 ], 00:13:05.235 "driver_specific": {} 00:13:05.235 } 00:13:05.235 ] 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:05.235 10:17:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.235 10:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.493 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.493 "name": "Existed_Raid", 00:13:05.493 "uuid": "988230e5-2712-11ef-b084-113036b5c18d", 00:13:05.493 "strip_size_kb": 64, 00:13:05.493 "state": "configuring", 00:13:05.493 "raid_level": "raid0", 00:13:05.493 "superblock": true, 00:13:05.493 "num_base_bdevs": 4, 00:13:05.493 "num_base_bdevs_discovered": 2, 00:13:05.493 "num_base_bdevs_operational": 4, 00:13:05.493 "base_bdevs_list": [ 00:13:05.493 { 00:13:05.494 "name": "BaseBdev1", 00:13:05.494 "uuid": "978f4417-2712-11ef-b084-113036b5c18d", 00:13:05.494 "is_configured": true, 00:13:05.494 "data_offset": 2048, 00:13:05.494 "data_size": 63488 00:13:05.494 }, 00:13:05.494 { 00:13:05.494 "name": "BaseBdev2", 00:13:05.494 "uuid": "990d5b78-2712-11ef-b084-113036b5c18d", 00:13:05.494 "is_configured": true, 00:13:05.494 "data_offset": 2048, 00:13:05.494 "data_size": 63488 00:13:05.494 }, 00:13:05.494 { 00:13:05.494 "name": "BaseBdev3", 00:13:05.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.494 "is_configured": false, 00:13:05.494 "data_offset": 0, 00:13:05.494 "data_size": 0 00:13:05.494 }, 00:13:05.494 { 00:13:05.494 "name": "BaseBdev4", 00:13:05.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.494 "is_configured": false, 00:13:05.494 "data_offset": 0, 00:13:05.494 "data_size": 0 00:13:05.494 } 00:13:05.494 ] 00:13:05.494 }' 00:13:05.494 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.494 10:17:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.751 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:06.010 [2024-06-10 10:17:11.546043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.010 BaseBdev3 00:13:06.010 10:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:06.010 10:17:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:13:06.010 10:17:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_timeout= 00:13:06.010 10:17:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:06.010 10:17:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:06.010 10:17:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:06.010 10:17:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:06.268 10:17:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:06.526 [ 00:13:06.526 { 00:13:06.526 "name": "BaseBdev3", 00:13:06.526 "aliases": [ 00:13:06.526 "99d3bb4a-2712-11ef-b084-113036b5c18d" 00:13:06.526 ], 00:13:06.526 "product_name": "Malloc disk", 00:13:06.526 "block_size": 512, 00:13:06.526 "num_blocks": 65536, 00:13:06.526 "uuid": "99d3bb4a-2712-11ef-b084-113036b5c18d", 00:13:06.526 "assigned_rate_limits": { 00:13:06.526 "rw_ios_per_sec": 0, 00:13:06.526 "rw_mbytes_per_sec": 0, 00:13:06.526 "r_mbytes_per_sec": 0, 00:13:06.526 "w_mbytes_per_sec": 0 00:13:06.526 }, 00:13:06.526 "claimed": true, 00:13:06.526 "claim_type": "exclusive_write", 00:13:06.526 "zoned": false, 00:13:06.526 "supported_io_types": { 00:13:06.526 "read": true, 00:13:06.526 "write": true, 00:13:06.526 "unmap": true, 00:13:06.526 "write_zeroes": true, 00:13:06.526 "flush": true, 00:13:06.526 "reset": true, 00:13:06.526 "compare": false, 00:13:06.526 "compare_and_write": false, 00:13:06.526 "abort": true, 00:13:06.526 "nvme_admin": false, 00:13:06.526 "nvme_io": false 00:13:06.526 }, 00:13:06.526 "memory_domains": [ 00:13:06.526 { 00:13:06.526 "dma_device_id": "system", 00:13:06.526 "dma_device_type": 1 00:13:06.526 }, 00:13:06.526 { 00:13:06.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.526 "dma_device_type": 2 00:13:06.526 } 00:13:06.526 ], 00:13:06.526 "driver_specific": {} 00:13:06.526 } 00:13:06.526 ] 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:06.526 10:17:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.526 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.784 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:06.784 "name": "Existed_Raid", 00:13:06.784 "uuid": "988230e5-2712-11ef-b084-113036b5c18d", 00:13:06.784 "strip_size_kb": 64, 00:13:06.784 "state": "configuring", 00:13:06.784 "raid_level": "raid0", 00:13:06.784 "superblock": true, 00:13:06.784 "num_base_bdevs": 4, 00:13:06.784 "num_base_bdevs_discovered": 3, 00:13:06.784 "num_base_bdevs_operational": 4, 00:13:06.784 "base_bdevs_list": [ 00:13:06.784 { 00:13:06.784 "name": "BaseBdev1", 00:13:06.784 "uuid": "978f4417-2712-11ef-b084-113036b5c18d", 00:13:06.784 "is_configured": true, 00:13:06.784 "data_offset": 2048, 00:13:06.784 "data_size": 63488 00:13:06.784 }, 00:13:06.784 { 00:13:06.784 "name": "BaseBdev2", 00:13:06.784 "uuid": "990d5b78-2712-11ef-b084-113036b5c18d", 00:13:06.784 "is_configured": true, 00:13:06.784 "data_offset": 2048, 00:13:06.784 "data_size": 63488 00:13:06.784 }, 00:13:06.784 { 00:13:06.784 "name": "BaseBdev3", 00:13:06.784 "uuid": "99d3bb4a-2712-11ef-b084-113036b5c18d", 00:13:06.784 "is_configured": true, 00:13:06.784 "data_offset": 2048, 00:13:06.784 "data_size": 63488 00:13:06.784 }, 00:13:06.784 { 00:13:06.784 "name": "BaseBdev4", 00:13:06.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.784 "is_configured": false, 00:13:06.784 "data_offset": 0, 00:13:06.784 "data_size": 0 00:13:06.784 } 00:13:06.784 ] 00:13:06.784 }' 00:13:06.784 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:06.784 10:17:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.464 10:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:07.722 [2024-06-10 10:17:13.238176] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:07.722 [2024-06-10 10:17:13.238269] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c9f9a00 00:13:07.722 [2024-06-10 10:17:13.238282] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:07.722 [2024-06-10 10:17:13.238316] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ca5cec0 00:13:07.722 [2024-06-10 10:17:13.238391] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c9f9a00 00:13:07.722 [2024-06-10 10:17:13.238400] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c9f9a00 00:13:07.722 [2024-06-10 10:17:13.238428] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.722 BaseBdev4 00:13:07.722 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:07.722 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:13:07.722 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:07.722 10:17:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:07.722 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:07.722 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:07.722 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:07.981 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:08.548 [ 00:13:08.548 { 00:13:08.548 "name": "BaseBdev4", 00:13:08.548 "aliases": [ 00:13:08.548 "9ad5ec8e-2712-11ef-b084-113036b5c18d" 00:13:08.548 ], 00:13:08.548 "product_name": "Malloc disk", 00:13:08.548 "block_size": 512, 00:13:08.548 "num_blocks": 65536, 00:13:08.548 "uuid": "9ad5ec8e-2712-11ef-b084-113036b5c18d", 00:13:08.548 "assigned_rate_limits": { 00:13:08.548 "rw_ios_per_sec": 0, 00:13:08.548 "rw_mbytes_per_sec": 0, 00:13:08.548 "r_mbytes_per_sec": 0, 00:13:08.548 "w_mbytes_per_sec": 0 00:13:08.548 }, 00:13:08.548 "claimed": true, 00:13:08.548 "claim_type": "exclusive_write", 00:13:08.548 "zoned": false, 00:13:08.548 "supported_io_types": { 00:13:08.548 "read": true, 00:13:08.548 "write": true, 00:13:08.548 "unmap": true, 00:13:08.548 "write_zeroes": true, 00:13:08.548 "flush": true, 00:13:08.548 "reset": true, 00:13:08.548 "compare": false, 00:13:08.548 "compare_and_write": false, 00:13:08.548 "abort": true, 00:13:08.548 "nvme_admin": false, 00:13:08.548 "nvme_io": false 00:13:08.548 }, 00:13:08.548 "memory_domains": [ 00:13:08.548 { 00:13:08.548 "dma_device_id": "system", 00:13:08.548 "dma_device_type": 1 00:13:08.548 }, 00:13:08.548 { 00:13:08.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.548 "dma_device_type": 2 00:13:08.548 } 00:13:08.548 ], 00:13:08.548 "driver_specific": {} 00:13:08.548 } 00:13:08.548 ] 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.548 10:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.807 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.807 "name": "Existed_Raid", 00:13:08.807 "uuid": "988230e5-2712-11ef-b084-113036b5c18d", 00:13:08.807 "strip_size_kb": 64, 00:13:08.807 "state": "online", 00:13:08.807 "raid_level": "raid0", 00:13:08.807 "superblock": true, 00:13:08.807 "num_base_bdevs": 4, 00:13:08.807 "num_base_bdevs_discovered": 4, 00:13:08.807 "num_base_bdevs_operational": 4, 00:13:08.807 "base_bdevs_list": [ 00:13:08.807 { 00:13:08.807 "name": "BaseBdev1", 00:13:08.807 "uuid": "978f4417-2712-11ef-b084-113036b5c18d", 00:13:08.807 "is_configured": true, 00:13:08.807 "data_offset": 2048, 00:13:08.807 "data_size": 63488 00:13:08.807 }, 00:13:08.807 { 00:13:08.807 "name": "BaseBdev2", 00:13:08.807 "uuid": "990d5b78-2712-11ef-b084-113036b5c18d", 00:13:08.807 "is_configured": true, 00:13:08.807 "data_offset": 2048, 00:13:08.807 "data_size": 63488 00:13:08.808 }, 00:13:08.808 { 00:13:08.808 "name": "BaseBdev3", 00:13:08.808 "uuid": "99d3bb4a-2712-11ef-b084-113036b5c18d", 00:13:08.808 "is_configured": true, 00:13:08.808 "data_offset": 2048, 00:13:08.808 "data_size": 63488 00:13:08.808 }, 00:13:08.808 { 00:13:08.808 "name": "BaseBdev4", 00:13:08.808 "uuid": "9ad5ec8e-2712-11ef-b084-113036b5c18d", 00:13:08.808 "is_configured": true, 00:13:08.808 "data_offset": 2048, 00:13:08.808 "data_size": 63488 00:13:08.808 } 00:13:08.808 ] 00:13:08.808 }' 00:13:08.808 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.808 10:17:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.067 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:09.067 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:09.067 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:09.067 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:09.067 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:09.067 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:09.067 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:09.067 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:09.326 [2024-06-10 10:17:14.754104] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.326 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:09.326 "name": "Existed_Raid", 00:13:09.326 "aliases": [ 00:13:09.326 "988230e5-2712-11ef-b084-113036b5c18d" 00:13:09.326 ], 00:13:09.326 "product_name": "Raid Volume", 00:13:09.326 "block_size": 512, 00:13:09.326 "num_blocks": 253952, 00:13:09.326 "uuid": "988230e5-2712-11ef-b084-113036b5c18d", 00:13:09.326 
"assigned_rate_limits": { 00:13:09.326 "rw_ios_per_sec": 0, 00:13:09.326 "rw_mbytes_per_sec": 0, 00:13:09.326 "r_mbytes_per_sec": 0, 00:13:09.326 "w_mbytes_per_sec": 0 00:13:09.326 }, 00:13:09.326 "claimed": false, 00:13:09.326 "zoned": false, 00:13:09.327 "supported_io_types": { 00:13:09.327 "read": true, 00:13:09.327 "write": true, 00:13:09.327 "unmap": true, 00:13:09.327 "write_zeroes": true, 00:13:09.327 "flush": true, 00:13:09.327 "reset": true, 00:13:09.327 "compare": false, 00:13:09.327 "compare_and_write": false, 00:13:09.327 "abort": false, 00:13:09.327 "nvme_admin": false, 00:13:09.327 "nvme_io": false 00:13:09.327 }, 00:13:09.327 "memory_domains": [ 00:13:09.327 { 00:13:09.327 "dma_device_id": "system", 00:13:09.327 "dma_device_type": 1 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.327 "dma_device_type": 2 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "dma_device_id": "system", 00:13:09.327 "dma_device_type": 1 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.327 "dma_device_type": 2 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "dma_device_id": "system", 00:13:09.327 "dma_device_type": 1 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.327 "dma_device_type": 2 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "dma_device_id": "system", 00:13:09.327 "dma_device_type": 1 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.327 "dma_device_type": 2 00:13:09.327 } 00:13:09.327 ], 00:13:09.327 "driver_specific": { 00:13:09.327 "raid": { 00:13:09.327 "uuid": "988230e5-2712-11ef-b084-113036b5c18d", 00:13:09.327 "strip_size_kb": 64, 00:13:09.327 "state": "online", 00:13:09.327 "raid_level": "raid0", 00:13:09.327 "superblock": true, 00:13:09.327 "num_base_bdevs": 4, 00:13:09.327 "num_base_bdevs_discovered": 4, 00:13:09.327 "num_base_bdevs_operational": 4, 00:13:09.327 "base_bdevs_list": [ 00:13:09.327 { 00:13:09.327 "name": "BaseBdev1", 00:13:09.327 "uuid": "978f4417-2712-11ef-b084-113036b5c18d", 00:13:09.327 "is_configured": true, 00:13:09.327 "data_offset": 2048, 00:13:09.327 "data_size": 63488 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "name": "BaseBdev2", 00:13:09.327 "uuid": "990d5b78-2712-11ef-b084-113036b5c18d", 00:13:09.327 "is_configured": true, 00:13:09.327 "data_offset": 2048, 00:13:09.327 "data_size": 63488 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "name": "BaseBdev3", 00:13:09.327 "uuid": "99d3bb4a-2712-11ef-b084-113036b5c18d", 00:13:09.327 "is_configured": true, 00:13:09.327 "data_offset": 2048, 00:13:09.327 "data_size": 63488 00:13:09.327 }, 00:13:09.327 { 00:13:09.327 "name": "BaseBdev4", 00:13:09.327 "uuid": "9ad5ec8e-2712-11ef-b084-113036b5c18d", 00:13:09.327 "is_configured": true, 00:13:09.327 "data_offset": 2048, 00:13:09.327 "data_size": 63488 00:13:09.327 } 00:13:09.327 ] 00:13:09.327 } 00:13:09.327 } 00:13:09.327 }' 00:13:09.327 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:09.327 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:09.327 BaseBdev2 00:13:09.327 BaseBdev3 00:13:09.327 BaseBdev4' 00:13:09.327 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:09.327 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:09.327 10:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:09.586 "name": "BaseBdev1", 00:13:09.586 "aliases": [ 00:13:09.586 "978f4417-2712-11ef-b084-113036b5c18d" 00:13:09.586 ], 00:13:09.586 "product_name": "Malloc disk", 00:13:09.586 "block_size": 512, 00:13:09.586 "num_blocks": 65536, 00:13:09.586 "uuid": "978f4417-2712-11ef-b084-113036b5c18d", 00:13:09.586 "assigned_rate_limits": { 00:13:09.586 "rw_ios_per_sec": 0, 00:13:09.586 "rw_mbytes_per_sec": 0, 00:13:09.586 "r_mbytes_per_sec": 0, 00:13:09.586 "w_mbytes_per_sec": 0 00:13:09.586 }, 00:13:09.586 "claimed": true, 00:13:09.586 "claim_type": "exclusive_write", 00:13:09.586 "zoned": false, 00:13:09.586 "supported_io_types": { 00:13:09.586 "read": true, 00:13:09.586 "write": true, 00:13:09.586 "unmap": true, 00:13:09.586 "write_zeroes": true, 00:13:09.586 "flush": true, 00:13:09.586 "reset": true, 00:13:09.586 "compare": false, 00:13:09.586 "compare_and_write": false, 00:13:09.586 "abort": true, 00:13:09.586 "nvme_admin": false, 00:13:09.586 "nvme_io": false 00:13:09.586 }, 00:13:09.586 "memory_domains": [ 00:13:09.586 { 00:13:09.586 "dma_device_id": "system", 00:13:09.586 "dma_device_type": 1 00:13:09.586 }, 00:13:09.586 { 00:13:09.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.586 "dma_device_type": 2 00:13:09.586 } 00:13:09.586 ], 00:13:09.586 "driver_specific": {} 00:13:09.586 }' 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:09.586 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:10.154 "name": "BaseBdev2", 00:13:10.154 "aliases": [ 00:13:10.154 "990d5b78-2712-11ef-b084-113036b5c18d" 00:13:10.154 ], 
00:13:10.154 "product_name": "Malloc disk", 00:13:10.154 "block_size": 512, 00:13:10.154 "num_blocks": 65536, 00:13:10.154 "uuid": "990d5b78-2712-11ef-b084-113036b5c18d", 00:13:10.154 "assigned_rate_limits": { 00:13:10.154 "rw_ios_per_sec": 0, 00:13:10.154 "rw_mbytes_per_sec": 0, 00:13:10.154 "r_mbytes_per_sec": 0, 00:13:10.154 "w_mbytes_per_sec": 0 00:13:10.154 }, 00:13:10.154 "claimed": true, 00:13:10.154 "claim_type": "exclusive_write", 00:13:10.154 "zoned": false, 00:13:10.154 "supported_io_types": { 00:13:10.154 "read": true, 00:13:10.154 "write": true, 00:13:10.154 "unmap": true, 00:13:10.154 "write_zeroes": true, 00:13:10.154 "flush": true, 00:13:10.154 "reset": true, 00:13:10.154 "compare": false, 00:13:10.154 "compare_and_write": false, 00:13:10.154 "abort": true, 00:13:10.154 "nvme_admin": false, 00:13:10.154 "nvme_io": false 00:13:10.154 }, 00:13:10.154 "memory_domains": [ 00:13:10.154 { 00:13:10.154 "dma_device_id": "system", 00:13:10.154 "dma_device_type": 1 00:13:10.154 }, 00:13:10.154 { 00:13:10.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.154 "dma_device_type": 2 00:13:10.154 } 00:13:10.154 ], 00:13:10.154 "driver_specific": {} 00:13:10.154 }' 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:10.154 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:10.155 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:10.155 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:10.414 "name": "BaseBdev3", 00:13:10.414 "aliases": [ 00:13:10.414 "99d3bb4a-2712-11ef-b084-113036b5c18d" 00:13:10.414 ], 00:13:10.414 "product_name": "Malloc disk", 00:13:10.414 "block_size": 512, 00:13:10.414 "num_blocks": 65536, 00:13:10.414 "uuid": "99d3bb4a-2712-11ef-b084-113036b5c18d", 00:13:10.414 "assigned_rate_limits": { 00:13:10.414 "rw_ios_per_sec": 0, 00:13:10.414 "rw_mbytes_per_sec": 0, 00:13:10.414 "r_mbytes_per_sec": 0, 00:13:10.414 "w_mbytes_per_sec": 0 00:13:10.414 }, 00:13:10.414 "claimed": true, 00:13:10.414 "claim_type": "exclusive_write", 
00:13:10.414 "zoned": false, 00:13:10.414 "supported_io_types": { 00:13:10.414 "read": true, 00:13:10.414 "write": true, 00:13:10.414 "unmap": true, 00:13:10.414 "write_zeroes": true, 00:13:10.414 "flush": true, 00:13:10.414 "reset": true, 00:13:10.414 "compare": false, 00:13:10.414 "compare_and_write": false, 00:13:10.414 "abort": true, 00:13:10.414 "nvme_admin": false, 00:13:10.414 "nvme_io": false 00:13:10.414 }, 00:13:10.414 "memory_domains": [ 00:13:10.414 { 00:13:10.414 "dma_device_id": "system", 00:13:10.414 "dma_device_type": 1 00:13:10.414 }, 00:13:10.414 { 00:13:10.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.414 "dma_device_type": 2 00:13:10.414 } 00:13:10.414 ], 00:13:10.414 "driver_specific": {} 00:13:10.414 }' 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:10.414 10:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:10.414 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:10.414 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:10.674 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:10.674 "name": "BaseBdev4", 00:13:10.674 "aliases": [ 00:13:10.674 "9ad5ec8e-2712-11ef-b084-113036b5c18d" 00:13:10.674 ], 00:13:10.674 "product_name": "Malloc disk", 00:13:10.674 "block_size": 512, 00:13:10.674 "num_blocks": 65536, 00:13:10.674 "uuid": "9ad5ec8e-2712-11ef-b084-113036b5c18d", 00:13:10.674 "assigned_rate_limits": { 00:13:10.674 "rw_ios_per_sec": 0, 00:13:10.674 "rw_mbytes_per_sec": 0, 00:13:10.674 "r_mbytes_per_sec": 0, 00:13:10.674 "w_mbytes_per_sec": 0 00:13:10.674 }, 00:13:10.674 "claimed": true, 00:13:10.674 "claim_type": "exclusive_write", 00:13:10.674 "zoned": false, 00:13:10.674 "supported_io_types": { 00:13:10.674 "read": true, 00:13:10.674 "write": true, 00:13:10.674 "unmap": true, 00:13:10.674 "write_zeroes": true, 00:13:10.674 "flush": true, 00:13:10.674 "reset": true, 00:13:10.674 "compare": false, 00:13:10.674 "compare_and_write": false, 00:13:10.674 "abort": true, 00:13:10.674 "nvme_admin": false, 00:13:10.674 "nvme_io": false 00:13:10.674 }, 00:13:10.674 
"memory_domains": [ 00:13:10.674 { 00:13:10.674 "dma_device_id": "system", 00:13:10.674 "dma_device_type": 1 00:13:10.674 }, 00:13:10.674 { 00:13:10.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.674 "dma_device_type": 2 00:13:10.674 } 00:13:10.674 ], 00:13:10.674 "driver_specific": {} 00:13:10.674 }' 00:13:10.674 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.674 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.674 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:10.674 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.674 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.674 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:10.674 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.932 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.932 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:10.932 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.932 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.932 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:10.932 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:11.190 [2024-06-10 10:17:16.602174] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:11.190 [2024-06-10 10:17:16.602204] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.190 [2024-06-10 10:17:16.602219] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:11.190 10:17:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.190 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.448 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:11.448 "name": "Existed_Raid", 00:13:11.448 "uuid": "988230e5-2712-11ef-b084-113036b5c18d", 00:13:11.448 "strip_size_kb": 64, 00:13:11.448 "state": "offline", 00:13:11.448 "raid_level": "raid0", 00:13:11.448 "superblock": true, 00:13:11.448 "num_base_bdevs": 4, 00:13:11.448 "num_base_bdevs_discovered": 3, 00:13:11.448 "num_base_bdevs_operational": 3, 00:13:11.448 "base_bdevs_list": [ 00:13:11.448 { 00:13:11.448 "name": null, 00:13:11.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.448 "is_configured": false, 00:13:11.448 "data_offset": 2048, 00:13:11.448 "data_size": 63488 00:13:11.448 }, 00:13:11.448 { 00:13:11.448 "name": "BaseBdev2", 00:13:11.448 "uuid": "990d5b78-2712-11ef-b084-113036b5c18d", 00:13:11.448 "is_configured": true, 00:13:11.448 "data_offset": 2048, 00:13:11.448 "data_size": 63488 00:13:11.448 }, 00:13:11.448 { 00:13:11.448 "name": "BaseBdev3", 00:13:11.448 "uuid": "99d3bb4a-2712-11ef-b084-113036b5c18d", 00:13:11.448 "is_configured": true, 00:13:11.448 "data_offset": 2048, 00:13:11.448 "data_size": 63488 00:13:11.448 }, 00:13:11.448 { 00:13:11.448 "name": "BaseBdev4", 00:13:11.448 "uuid": "9ad5ec8e-2712-11ef-b084-113036b5c18d", 00:13:11.448 "is_configured": true, 00:13:11.448 "data_offset": 2048, 00:13:11.448 "data_size": 63488 00:13:11.448 } 00:13:11.448 ] 00:13:11.448 }' 00:13:11.448 10:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:11.448 10:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.706 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:11.706 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:11.706 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.706 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:12.009 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:12.009 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:12.009 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:12.268 [2024-06-10 10:17:17.623065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:12.268 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:12.268 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs 
)) 00:13:12.268 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.268 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:12.527 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:12.527 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:12.527 10:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:12.527 [2024-06-10 10:17:18.083872] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:12.527 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:12.527 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:12.527 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:12.527 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.094 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:13.094 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:13.094 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:13.094 [2024-06-10 10:17:18.632669] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:13.094 [2024-06-10 10:17:18.632723] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c9f9a00 name Existed_Raid, state offline 00:13:13.094 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:13.094 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:13.094 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.094 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:13.662 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:13.662 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:13.662 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:13.662 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:13.662 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:13.662 10:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:13.662 BaseBdev2 00:13:13.662 10:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:13.662 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local 
bdev_name=BaseBdev2 00:13:13.662 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:13.662 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:13.662 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:13.662 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:13.662 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:13.919 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:14.176 [ 00:13:14.176 { 00:13:14.176 "name": "BaseBdev2", 00:13:14.177 "aliases": [ 00:13:14.177 "9e617934-2712-11ef-b084-113036b5c18d" 00:13:14.177 ], 00:13:14.177 "product_name": "Malloc disk", 00:13:14.177 "block_size": 512, 00:13:14.177 "num_blocks": 65536, 00:13:14.177 "uuid": "9e617934-2712-11ef-b084-113036b5c18d", 00:13:14.177 "assigned_rate_limits": { 00:13:14.177 "rw_ios_per_sec": 0, 00:13:14.177 "rw_mbytes_per_sec": 0, 00:13:14.177 "r_mbytes_per_sec": 0, 00:13:14.177 "w_mbytes_per_sec": 0 00:13:14.177 }, 00:13:14.177 "claimed": false, 00:13:14.177 "zoned": false, 00:13:14.177 "supported_io_types": { 00:13:14.177 "read": true, 00:13:14.177 "write": true, 00:13:14.177 "unmap": true, 00:13:14.177 "write_zeroes": true, 00:13:14.177 "flush": true, 00:13:14.177 "reset": true, 00:13:14.177 "compare": false, 00:13:14.177 "compare_and_write": false, 00:13:14.177 "abort": true, 00:13:14.177 "nvme_admin": false, 00:13:14.177 "nvme_io": false 00:13:14.177 }, 00:13:14.177 "memory_domains": [ 00:13:14.177 { 00:13:14.177 "dma_device_id": "system", 00:13:14.177 "dma_device_type": 1 00:13:14.177 }, 00:13:14.177 { 00:13:14.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.177 "dma_device_type": 2 00:13:14.177 } 00:13:14.177 ], 00:13:14.177 "driver_specific": {} 00:13:14.177 } 00:13:14.177 ] 00:13:14.177 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:14.177 10:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:14.177 10:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:14.177 10:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.434 BaseBdev3 00:13:14.434 10:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:14.434 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:13:14.434 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:14.434 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:14.434 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:14.434 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:14.434 10:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:14.692 10:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:14.950 [ 00:13:14.950 { 00:13:14.950 "name": "BaseBdev3", 00:13:14.950 "aliases": [ 00:13:14.950 "9eceba43-2712-11ef-b084-113036b5c18d" 00:13:14.950 ], 00:13:14.950 "product_name": "Malloc disk", 00:13:14.950 "block_size": 512, 00:13:14.950 "num_blocks": 65536, 00:13:14.950 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:14.950 "assigned_rate_limits": { 00:13:14.950 "rw_ios_per_sec": 0, 00:13:14.950 "rw_mbytes_per_sec": 0, 00:13:14.950 "r_mbytes_per_sec": 0, 00:13:14.950 "w_mbytes_per_sec": 0 00:13:14.950 }, 00:13:14.950 "claimed": false, 00:13:14.950 "zoned": false, 00:13:14.950 "supported_io_types": { 00:13:14.950 "read": true, 00:13:14.950 "write": true, 00:13:14.950 "unmap": true, 00:13:14.950 "write_zeroes": true, 00:13:14.950 "flush": true, 00:13:14.950 "reset": true, 00:13:14.950 "compare": false, 00:13:14.950 "compare_and_write": false, 00:13:14.950 "abort": true, 00:13:14.950 "nvme_admin": false, 00:13:14.950 "nvme_io": false 00:13:14.950 }, 00:13:14.950 "memory_domains": [ 00:13:14.950 { 00:13:14.950 "dma_device_id": "system", 00:13:14.950 "dma_device_type": 1 00:13:14.950 }, 00:13:14.950 { 00:13:14.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.950 "dma_device_type": 2 00:13:14.950 } 00:13:14.950 ], 00:13:14.950 "driver_specific": {} 00:13:14.950 } 00:13:14.950 ] 00:13:14.950 10:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:14.950 10:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:14.950 10:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:14.950 10:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:15.207 BaseBdev4 00:13:15.207 10:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:15.207 10:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:13:15.207 10:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:15.207 10:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:15.207 10:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:15.207 10:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:15.207 10:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:15.466 10:17:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:15.725 [ 00:13:15.725 { 00:13:15.725 "name": "BaseBdev4", 00:13:15.725 "aliases": [ 00:13:15.725 "9f4488a0-2712-11ef-b084-113036b5c18d" 00:13:15.725 ], 00:13:15.725 "product_name": "Malloc disk", 00:13:15.725 "block_size": 512, 00:13:15.725 "num_blocks": 65536, 00:13:15.725 "uuid": 
"9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:15.725 "assigned_rate_limits": { 00:13:15.725 "rw_ios_per_sec": 0, 00:13:15.725 "rw_mbytes_per_sec": 0, 00:13:15.725 "r_mbytes_per_sec": 0, 00:13:15.725 "w_mbytes_per_sec": 0 00:13:15.725 }, 00:13:15.725 "claimed": false, 00:13:15.725 "zoned": false, 00:13:15.725 "supported_io_types": { 00:13:15.725 "read": true, 00:13:15.725 "write": true, 00:13:15.725 "unmap": true, 00:13:15.725 "write_zeroes": true, 00:13:15.725 "flush": true, 00:13:15.725 "reset": true, 00:13:15.725 "compare": false, 00:13:15.725 "compare_and_write": false, 00:13:15.725 "abort": true, 00:13:15.725 "nvme_admin": false, 00:13:15.725 "nvme_io": false 00:13:15.725 }, 00:13:15.725 "memory_domains": [ 00:13:15.725 { 00:13:15.725 "dma_device_id": "system", 00:13:15.725 "dma_device_type": 1 00:13:15.725 }, 00:13:15.725 { 00:13:15.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.725 "dma_device_type": 2 00:13:15.725 } 00:13:15.725 ], 00:13:15.725 "driver_specific": {} 00:13:15.725 } 00:13:15.725 ] 00:13:15.983 10:17:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:15.983 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:15.983 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:15.983 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:16.241 [2024-06-10 10:17:21.617834] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:16.241 [2024-06-10 10:17:21.617890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:16.241 [2024-06-10 10:17:21.617900] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.241 [2024-06-10 10:17:21.618395] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.241 [2024-06-10 10:17:21.618415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.241 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.499 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:16.499 "name": "Existed_Raid", 00:13:16.499 "uuid": "9fd49453-2712-11ef-b084-113036b5c18d", 00:13:16.499 "strip_size_kb": 64, 00:13:16.499 "state": "configuring", 00:13:16.499 "raid_level": "raid0", 00:13:16.499 "superblock": true, 00:13:16.499 "num_base_bdevs": 4, 00:13:16.499 "num_base_bdevs_discovered": 3, 00:13:16.499 "num_base_bdevs_operational": 4, 00:13:16.499 "base_bdevs_list": [ 00:13:16.499 { 00:13:16.499 "name": "BaseBdev1", 00:13:16.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.499 "is_configured": false, 00:13:16.499 "data_offset": 0, 00:13:16.499 "data_size": 0 00:13:16.499 }, 00:13:16.499 { 00:13:16.499 "name": "BaseBdev2", 00:13:16.499 "uuid": "9e617934-2712-11ef-b084-113036b5c18d", 00:13:16.499 "is_configured": true, 00:13:16.499 "data_offset": 2048, 00:13:16.499 "data_size": 63488 00:13:16.499 }, 00:13:16.499 { 00:13:16.499 "name": "BaseBdev3", 00:13:16.499 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:16.499 "is_configured": true, 00:13:16.499 "data_offset": 2048, 00:13:16.499 "data_size": 63488 00:13:16.499 }, 00:13:16.499 { 00:13:16.499 "name": "BaseBdev4", 00:13:16.499 "uuid": "9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:16.499 "is_configured": true, 00:13:16.499 "data_offset": 2048, 00:13:16.499 "data_size": 63488 00:13:16.499 } 00:13:16.499 ] 00:13:16.499 }' 00:13:16.499 10:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:16.499 10:17:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.756 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:17.057 [2024-06-10 10:17:22.533868] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:13:17.057 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.319 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.319 "name": "Existed_Raid", 00:13:17.319 "uuid": "9fd49453-2712-11ef-b084-113036b5c18d", 00:13:17.319 "strip_size_kb": 64, 00:13:17.319 "state": "configuring", 00:13:17.319 "raid_level": "raid0", 00:13:17.319 "superblock": true, 00:13:17.319 "num_base_bdevs": 4, 00:13:17.319 "num_base_bdevs_discovered": 2, 00:13:17.319 "num_base_bdevs_operational": 4, 00:13:17.319 "base_bdevs_list": [ 00:13:17.319 { 00:13:17.319 "name": "BaseBdev1", 00:13:17.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.319 "is_configured": false, 00:13:17.319 "data_offset": 0, 00:13:17.319 "data_size": 0 00:13:17.319 }, 00:13:17.319 { 00:13:17.319 "name": null, 00:13:17.319 "uuid": "9e617934-2712-11ef-b084-113036b5c18d", 00:13:17.319 "is_configured": false, 00:13:17.319 "data_offset": 2048, 00:13:17.319 "data_size": 63488 00:13:17.319 }, 00:13:17.319 { 00:13:17.319 "name": "BaseBdev3", 00:13:17.319 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:17.319 "is_configured": true, 00:13:17.319 "data_offset": 2048, 00:13:17.319 "data_size": 63488 00:13:17.319 }, 00:13:17.319 { 00:13:17.319 "name": "BaseBdev4", 00:13:17.319 "uuid": "9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:17.319 "is_configured": true, 00:13:17.319 "data_offset": 2048, 00:13:17.319 "data_size": 63488 00:13:17.319 } 00:13:17.319 ] 00:13:17.319 }' 00:13:17.319 10:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.319 10:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.577 10:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.578 10:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:17.838 10:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:17.838 10:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:18.097 [2024-06-10 10:17:23.634042] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.097 BaseBdev1 00:13:18.097 10:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:18.097 10:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:13:18.097 10:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:18.097 10:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:18.097 10:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:18.097 10:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:18.097 10:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:18.355 10:17:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:18.614 [ 00:13:18.614 { 00:13:18.614 "name": "BaseBdev1", 00:13:18.614 "aliases": [ 00:13:18.614 "a10836a8-2712-11ef-b084-113036b5c18d" 00:13:18.614 ], 00:13:18.614 "product_name": "Malloc disk", 00:13:18.614 "block_size": 512, 00:13:18.614 "num_blocks": 65536, 00:13:18.614 "uuid": "a10836a8-2712-11ef-b084-113036b5c18d", 00:13:18.614 "assigned_rate_limits": { 00:13:18.614 "rw_ios_per_sec": 0, 00:13:18.614 "rw_mbytes_per_sec": 0, 00:13:18.614 "r_mbytes_per_sec": 0, 00:13:18.614 "w_mbytes_per_sec": 0 00:13:18.614 }, 00:13:18.614 "claimed": true, 00:13:18.614 "claim_type": "exclusive_write", 00:13:18.614 "zoned": false, 00:13:18.614 "supported_io_types": { 00:13:18.614 "read": true, 00:13:18.614 "write": true, 00:13:18.614 "unmap": true, 00:13:18.614 "write_zeroes": true, 00:13:18.614 "flush": true, 00:13:18.614 "reset": true, 00:13:18.614 "compare": false, 00:13:18.614 "compare_and_write": false, 00:13:18.614 "abort": true, 00:13:18.614 "nvme_admin": false, 00:13:18.614 "nvme_io": false 00:13:18.614 }, 00:13:18.614 "memory_domains": [ 00:13:18.614 { 00:13:18.614 "dma_device_id": "system", 00:13:18.614 "dma_device_type": 1 00:13:18.614 }, 00:13:18.614 { 00:13:18.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.614 "dma_device_type": 2 00:13:18.614 } 00:13:18.614 ], 00:13:18.614 "driver_specific": {} 00:13:18.614 } 00:13:18.614 ] 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.614 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.179 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:19.179 "name": "Existed_Raid", 00:13:19.179 "uuid": "9fd49453-2712-11ef-b084-113036b5c18d", 00:13:19.179 "strip_size_kb": 64, 00:13:19.179 "state": "configuring", 00:13:19.179 "raid_level": "raid0", 00:13:19.179 "superblock": true, 00:13:19.179 "num_base_bdevs": 4, 00:13:19.179 "num_base_bdevs_discovered": 3, 
00:13:19.179 "num_base_bdevs_operational": 4, 00:13:19.179 "base_bdevs_list": [ 00:13:19.179 { 00:13:19.179 "name": "BaseBdev1", 00:13:19.179 "uuid": "a10836a8-2712-11ef-b084-113036b5c18d", 00:13:19.179 "is_configured": true, 00:13:19.179 "data_offset": 2048, 00:13:19.179 "data_size": 63488 00:13:19.179 }, 00:13:19.179 { 00:13:19.179 "name": null, 00:13:19.179 "uuid": "9e617934-2712-11ef-b084-113036b5c18d", 00:13:19.179 "is_configured": false, 00:13:19.179 "data_offset": 2048, 00:13:19.179 "data_size": 63488 00:13:19.179 }, 00:13:19.179 { 00:13:19.180 "name": "BaseBdev3", 00:13:19.180 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:19.180 "is_configured": true, 00:13:19.180 "data_offset": 2048, 00:13:19.180 "data_size": 63488 00:13:19.180 }, 00:13:19.180 { 00:13:19.180 "name": "BaseBdev4", 00:13:19.180 "uuid": "9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:19.180 "is_configured": true, 00:13:19.180 "data_offset": 2048, 00:13:19.180 "data_size": 63488 00:13:19.180 } 00:13:19.180 ] 00:13:19.180 }' 00:13:19.180 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:19.180 10:17:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.500 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.500 10:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:19.774 [2024-06-10 10:17:25.345997] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.774 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.033 10:17:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:20.033 "name": "Existed_Raid", 00:13:20.033 "uuid": "9fd49453-2712-11ef-b084-113036b5c18d", 00:13:20.033 "strip_size_kb": 64, 00:13:20.033 "state": "configuring", 00:13:20.033 "raid_level": "raid0", 00:13:20.033 "superblock": true, 00:13:20.033 "num_base_bdevs": 4, 00:13:20.033 "num_base_bdevs_discovered": 2, 00:13:20.033 "num_base_bdevs_operational": 4, 00:13:20.033 "base_bdevs_list": [ 00:13:20.033 { 00:13:20.033 "name": "BaseBdev1", 00:13:20.033 "uuid": "a10836a8-2712-11ef-b084-113036b5c18d", 00:13:20.033 "is_configured": true, 00:13:20.033 "data_offset": 2048, 00:13:20.033 "data_size": 63488 00:13:20.033 }, 00:13:20.033 { 00:13:20.033 "name": null, 00:13:20.033 "uuid": "9e617934-2712-11ef-b084-113036b5c18d", 00:13:20.033 "is_configured": false, 00:13:20.033 "data_offset": 2048, 00:13:20.033 "data_size": 63488 00:13:20.033 }, 00:13:20.033 { 00:13:20.033 "name": null, 00:13:20.033 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:20.033 "is_configured": false, 00:13:20.033 "data_offset": 2048, 00:13:20.033 "data_size": 63488 00:13:20.033 }, 00:13:20.033 { 00:13:20.033 "name": "BaseBdev4", 00:13:20.033 "uuid": "9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:20.033 "is_configured": true, 00:13:20.033 "data_offset": 2048, 00:13:20.033 "data_size": 63488 00:13:20.033 } 00:13:20.033 ] 00:13:20.033 }' 00:13:20.033 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:20.033 10:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.599 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.599 10:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:20.858 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:20.858 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:21.116 [2024-06-10 10:17:26.526057] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.116 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.375 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:21.375 "name": "Existed_Raid", 00:13:21.375 "uuid": "9fd49453-2712-11ef-b084-113036b5c18d", 00:13:21.376 "strip_size_kb": 64, 00:13:21.376 "state": "configuring", 00:13:21.376 "raid_level": "raid0", 00:13:21.376 "superblock": true, 00:13:21.376 "num_base_bdevs": 4, 00:13:21.376 "num_base_bdevs_discovered": 3, 00:13:21.376 "num_base_bdevs_operational": 4, 00:13:21.376 "base_bdevs_list": [ 00:13:21.376 { 00:13:21.376 "name": "BaseBdev1", 00:13:21.376 "uuid": "a10836a8-2712-11ef-b084-113036b5c18d", 00:13:21.376 "is_configured": true, 00:13:21.376 "data_offset": 2048, 00:13:21.376 "data_size": 63488 00:13:21.376 }, 00:13:21.376 { 00:13:21.376 "name": null, 00:13:21.376 "uuid": "9e617934-2712-11ef-b084-113036b5c18d", 00:13:21.376 "is_configured": false, 00:13:21.376 "data_offset": 2048, 00:13:21.376 "data_size": 63488 00:13:21.376 }, 00:13:21.376 { 00:13:21.376 "name": "BaseBdev3", 00:13:21.376 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:21.376 "is_configured": true, 00:13:21.376 "data_offset": 2048, 00:13:21.376 "data_size": 63488 00:13:21.376 }, 00:13:21.376 { 00:13:21.376 "name": "BaseBdev4", 00:13:21.376 "uuid": "9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:21.376 "is_configured": true, 00:13:21.376 "data_offset": 2048, 00:13:21.376 "data_size": 63488 00:13:21.376 } 00:13:21.376 ] 00:13:21.376 }' 00:13:21.376 10:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:21.376 10:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.634 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.634 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:21.892 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:21.892 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:22.150 [2024-06-10 10:17:27.710108] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.150 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.409 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.409 "name": "Existed_Raid", 00:13:22.409 "uuid": "9fd49453-2712-11ef-b084-113036b5c18d", 00:13:22.409 "strip_size_kb": 64, 00:13:22.409 "state": "configuring", 00:13:22.409 "raid_level": "raid0", 00:13:22.409 "superblock": true, 00:13:22.409 "num_base_bdevs": 4, 00:13:22.409 "num_base_bdevs_discovered": 2, 00:13:22.409 "num_base_bdevs_operational": 4, 00:13:22.409 "base_bdevs_list": [ 00:13:22.409 { 00:13:22.409 "name": null, 00:13:22.409 "uuid": "a10836a8-2712-11ef-b084-113036b5c18d", 00:13:22.409 "is_configured": false, 00:13:22.409 "data_offset": 2048, 00:13:22.409 "data_size": 63488 00:13:22.409 }, 00:13:22.409 { 00:13:22.409 "name": null, 00:13:22.409 "uuid": "9e617934-2712-11ef-b084-113036b5c18d", 00:13:22.409 "is_configured": false, 00:13:22.409 "data_offset": 2048, 00:13:22.409 "data_size": 63488 00:13:22.409 }, 00:13:22.409 { 00:13:22.409 "name": "BaseBdev3", 00:13:22.409 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:22.409 "is_configured": true, 00:13:22.409 "data_offset": 2048, 00:13:22.409 "data_size": 63488 00:13:22.409 }, 00:13:22.409 { 00:13:22.409 "name": "BaseBdev4", 00:13:22.409 "uuid": "9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:22.409 "is_configured": true, 00:13:22.409 "data_offset": 2048, 00:13:22.409 "data_size": 63488 00:13:22.409 } 00:13:22.409 ] 00:13:22.409 }' 00:13:22.409 10:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.409 10:17:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.975 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.975 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:23.234 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:23.234 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:23.493 [2024-06-10 10:17:28.895234] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.493 10:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.751 10:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:23.751 "name": "Existed_Raid", 00:13:23.751 "uuid": "9fd49453-2712-11ef-b084-113036b5c18d", 00:13:23.751 "strip_size_kb": 64, 00:13:23.751 "state": "configuring", 00:13:23.751 "raid_level": "raid0", 00:13:23.751 "superblock": true, 00:13:23.751 "num_base_bdevs": 4, 00:13:23.751 "num_base_bdevs_discovered": 3, 00:13:23.751 "num_base_bdevs_operational": 4, 00:13:23.751 "base_bdevs_list": [ 00:13:23.751 { 00:13:23.751 "name": null, 00:13:23.751 "uuid": "a10836a8-2712-11ef-b084-113036b5c18d", 00:13:23.751 "is_configured": false, 00:13:23.751 "data_offset": 2048, 00:13:23.751 "data_size": 63488 00:13:23.751 }, 00:13:23.751 { 00:13:23.751 "name": "BaseBdev2", 00:13:23.751 "uuid": "9e617934-2712-11ef-b084-113036b5c18d", 00:13:23.751 "is_configured": true, 00:13:23.751 "data_offset": 2048, 00:13:23.751 "data_size": 63488 00:13:23.751 }, 00:13:23.751 { 00:13:23.751 "name": "BaseBdev3", 00:13:23.751 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:23.751 "is_configured": true, 00:13:23.751 "data_offset": 2048, 00:13:23.751 "data_size": 63488 00:13:23.751 }, 00:13:23.751 { 00:13:23.751 "name": "BaseBdev4", 00:13:23.751 "uuid": "9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:23.751 "is_configured": true, 00:13:23.751 "data_offset": 2048, 00:13:23.751 "data_size": 63488 00:13:23.751 } 00:13:23.751 ] 00:13:23.751 }' 00:13:23.751 10:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:23.751 10:17:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.009 10:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.009 10:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:24.577 10:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:24.577 10:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:24.577 10:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.834 10:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a10836a8-2712-11ef-b084-113036b5c18d 00:13:25.092 [2024-06-10 10:17:30.567439] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:25.092 [2024-06-10 10:17:30.567502] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c9f9f00 00:13:25.092 [2024-06-10 10:17:30.567507] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:25.092 [2024-06-10 10:17:30.567528] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ca5ce20 00:13:25.092 [2024-06-10 10:17:30.567564] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c9f9f00 00:13:25.092 [2024-06-10 10:17:30.567568] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c9f9f00 00:13:25.092 [2024-06-10 10:17:30.567586] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.092 NewBaseBdev 00:13:25.092 10:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:25.092 10:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:13:25.092 10:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:13:25.092 10:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:13:25.092 10:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:13:25.093 10:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:13:25.093 10:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:25.352 10:17:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:25.612 [ 00:13:25.612 { 00:13:25.612 "name": "NewBaseBdev", 00:13:25.612 "aliases": [ 00:13:25.612 "a10836a8-2712-11ef-b084-113036b5c18d" 00:13:25.612 ], 00:13:25.612 "product_name": "Malloc disk", 00:13:25.612 "block_size": 512, 00:13:25.612 "num_blocks": 65536, 00:13:25.612 "uuid": "a10836a8-2712-11ef-b084-113036b5c18d", 00:13:25.612 "assigned_rate_limits": { 00:13:25.612 "rw_ios_per_sec": 0, 00:13:25.612 "rw_mbytes_per_sec": 0, 00:13:25.612 "r_mbytes_per_sec": 0, 00:13:25.612 "w_mbytes_per_sec": 0 00:13:25.612 }, 00:13:25.612 "claimed": true, 00:13:25.612 "claim_type": "exclusive_write", 00:13:25.612 "zoned": false, 00:13:25.612 "supported_io_types": { 00:13:25.612 "read": true, 00:13:25.612 "write": true, 00:13:25.612 "unmap": true, 00:13:25.612 "write_zeroes": true, 00:13:25.612 "flush": true, 00:13:25.612 "reset": true, 00:13:25.612 "compare": false, 00:13:25.612 "compare_and_write": false, 00:13:25.612 "abort": true, 00:13:25.612 "nvme_admin": false, 00:13:25.612 "nvme_io": false 00:13:25.612 }, 00:13:25.612 "memory_domains": [ 00:13:25.612 { 00:13:25.612 "dma_device_id": "system", 00:13:25.612 "dma_device_type": 1 00:13:25.612 }, 00:13:25.612 { 00:13:25.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.612 "dma_device_type": 2 00:13:25.612 } 00:13:25.612 ], 00:13:25.612 "driver_specific": {} 00:13:25.612 } 00:13:25.612 ] 00:13:25.612 10:17:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.612 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.178 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:26.178 "name": "Existed_Raid", 00:13:26.178 "uuid": "9fd49453-2712-11ef-b084-113036b5c18d", 00:13:26.178 "strip_size_kb": 64, 00:13:26.178 "state": "online", 00:13:26.178 "raid_level": "raid0", 00:13:26.178 "superblock": true, 00:13:26.178 "num_base_bdevs": 4, 00:13:26.178 "num_base_bdevs_discovered": 4, 00:13:26.178 "num_base_bdevs_operational": 4, 00:13:26.178 "base_bdevs_list": [ 00:13:26.178 { 00:13:26.178 "name": "NewBaseBdev", 00:13:26.178 "uuid": "a10836a8-2712-11ef-b084-113036b5c18d", 00:13:26.178 "is_configured": true, 00:13:26.178 "data_offset": 2048, 00:13:26.178 "data_size": 63488 00:13:26.178 }, 00:13:26.178 { 00:13:26.178 "name": "BaseBdev2", 00:13:26.178 "uuid": "9e617934-2712-11ef-b084-113036b5c18d", 00:13:26.178 "is_configured": true, 00:13:26.178 "data_offset": 2048, 00:13:26.178 "data_size": 63488 00:13:26.178 }, 00:13:26.178 { 00:13:26.178 "name": "BaseBdev3", 00:13:26.178 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:26.178 "is_configured": true, 00:13:26.178 "data_offset": 2048, 00:13:26.178 "data_size": 63488 00:13:26.178 }, 00:13:26.178 { 00:13:26.178 "name": "BaseBdev4", 00:13:26.178 "uuid": "9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:26.178 "is_configured": true, 00:13:26.178 "data_offset": 2048, 00:13:26.178 "data_size": 63488 00:13:26.178 } 00:13:26.178 ] 00:13:26.178 }' 00:13:26.178 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:26.178 10:17:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.436 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:26.436 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:26.436 10:17:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:26.436 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:26.436 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:26.436 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:26.436 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:26.436 10:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:26.693 [2024-06-10 10:17:32.267510] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.693 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:26.693 "name": "Existed_Raid", 00:13:26.693 "aliases": [ 00:13:26.693 "9fd49453-2712-11ef-b084-113036b5c18d" 00:13:26.693 ], 00:13:26.693 "product_name": "Raid Volume", 00:13:26.693 "block_size": 512, 00:13:26.693 "num_blocks": 253952, 00:13:26.693 "uuid": "9fd49453-2712-11ef-b084-113036b5c18d", 00:13:26.693 "assigned_rate_limits": { 00:13:26.693 "rw_ios_per_sec": 0, 00:13:26.693 "rw_mbytes_per_sec": 0, 00:13:26.693 "r_mbytes_per_sec": 0, 00:13:26.693 "w_mbytes_per_sec": 0 00:13:26.693 }, 00:13:26.693 "claimed": false, 00:13:26.693 "zoned": false, 00:13:26.693 "supported_io_types": { 00:13:26.693 "read": true, 00:13:26.693 "write": true, 00:13:26.693 "unmap": true, 00:13:26.693 "write_zeroes": true, 00:13:26.693 "flush": true, 00:13:26.693 "reset": true, 00:13:26.693 "compare": false, 00:13:26.693 "compare_and_write": false, 00:13:26.693 "abort": false, 00:13:26.693 "nvme_admin": false, 00:13:26.693 "nvme_io": false 00:13:26.693 }, 00:13:26.693 "memory_domains": [ 00:13:26.693 { 00:13:26.693 "dma_device_id": "system", 00:13:26.693 "dma_device_type": 1 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.693 "dma_device_type": 2 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "dma_device_id": "system", 00:13:26.693 "dma_device_type": 1 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.693 "dma_device_type": 2 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "dma_device_id": "system", 00:13:26.693 "dma_device_type": 1 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.693 "dma_device_type": 2 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "dma_device_id": "system", 00:13:26.693 "dma_device_type": 1 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.693 "dma_device_type": 2 00:13:26.693 } 00:13:26.693 ], 00:13:26.693 "driver_specific": { 00:13:26.693 "raid": { 00:13:26.693 "uuid": "9fd49453-2712-11ef-b084-113036b5c18d", 00:13:26.693 "strip_size_kb": 64, 00:13:26.693 "state": "online", 00:13:26.693 "raid_level": "raid0", 00:13:26.693 "superblock": true, 00:13:26.693 "num_base_bdevs": 4, 00:13:26.693 "num_base_bdevs_discovered": 4, 00:13:26.693 "num_base_bdevs_operational": 4, 00:13:26.693 "base_bdevs_list": [ 00:13:26.693 { 00:13:26.693 "name": "NewBaseBdev", 00:13:26.693 "uuid": "a10836a8-2712-11ef-b084-113036b5c18d", 00:13:26.693 "is_configured": true, 00:13:26.693 "data_offset": 2048, 00:13:26.693 "data_size": 63488 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "name": "BaseBdev2", 00:13:26.693 "uuid": 
"9e617934-2712-11ef-b084-113036b5c18d", 00:13:26.693 "is_configured": true, 00:13:26.693 "data_offset": 2048, 00:13:26.693 "data_size": 63488 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "name": "BaseBdev3", 00:13:26.693 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:26.693 "is_configured": true, 00:13:26.693 "data_offset": 2048, 00:13:26.693 "data_size": 63488 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "name": "BaseBdev4", 00:13:26.693 "uuid": "9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:26.693 "is_configured": true, 00:13:26.693 "data_offset": 2048, 00:13:26.693 "data_size": 63488 00:13:26.693 } 00:13:26.693 ] 00:13:26.693 } 00:13:26.693 } 00:13:26.693 }' 00:13:26.693 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:26.973 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:26.973 BaseBdev2 00:13:26.973 BaseBdev3 00:13:26.973 BaseBdev4' 00:13:26.973 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:26.973 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:26.973 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:27.244 "name": "NewBaseBdev", 00:13:27.244 "aliases": [ 00:13:27.244 "a10836a8-2712-11ef-b084-113036b5c18d" 00:13:27.244 ], 00:13:27.244 "product_name": "Malloc disk", 00:13:27.244 "block_size": 512, 00:13:27.244 "num_blocks": 65536, 00:13:27.244 "uuid": "a10836a8-2712-11ef-b084-113036b5c18d", 00:13:27.244 "assigned_rate_limits": { 00:13:27.244 "rw_ios_per_sec": 0, 00:13:27.244 "rw_mbytes_per_sec": 0, 00:13:27.244 "r_mbytes_per_sec": 0, 00:13:27.244 "w_mbytes_per_sec": 0 00:13:27.244 }, 00:13:27.244 "claimed": true, 00:13:27.244 "claim_type": "exclusive_write", 00:13:27.244 "zoned": false, 00:13:27.244 "supported_io_types": { 00:13:27.244 "read": true, 00:13:27.244 "write": true, 00:13:27.244 "unmap": true, 00:13:27.244 "write_zeroes": true, 00:13:27.244 "flush": true, 00:13:27.244 "reset": true, 00:13:27.244 "compare": false, 00:13:27.244 "compare_and_write": false, 00:13:27.244 "abort": true, 00:13:27.244 "nvme_admin": false, 00:13:27.244 "nvme_io": false 00:13:27.244 }, 00:13:27.244 "memory_domains": [ 00:13:27.244 { 00:13:27.244 "dma_device_id": "system", 00:13:27.244 "dma_device_type": 1 00:13:27.244 }, 00:13:27.244 { 00:13:27.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.244 "dma_device_type": 2 00:13:27.244 } 00:13:27.244 ], 00:13:27.244 "driver_specific": {} 00:13:27.244 }' 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:27.244 10:17:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:27.244 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:27.502 "name": "BaseBdev2", 00:13:27.502 "aliases": [ 00:13:27.502 "9e617934-2712-11ef-b084-113036b5c18d" 00:13:27.502 ], 00:13:27.502 "product_name": "Malloc disk", 00:13:27.502 "block_size": 512, 00:13:27.502 "num_blocks": 65536, 00:13:27.502 "uuid": "9e617934-2712-11ef-b084-113036b5c18d", 00:13:27.502 "assigned_rate_limits": { 00:13:27.502 "rw_ios_per_sec": 0, 00:13:27.502 "rw_mbytes_per_sec": 0, 00:13:27.502 "r_mbytes_per_sec": 0, 00:13:27.502 "w_mbytes_per_sec": 0 00:13:27.502 }, 00:13:27.502 "claimed": true, 00:13:27.502 "claim_type": "exclusive_write", 00:13:27.502 "zoned": false, 00:13:27.502 "supported_io_types": { 00:13:27.502 "read": true, 00:13:27.502 "write": true, 00:13:27.502 "unmap": true, 00:13:27.502 "write_zeroes": true, 00:13:27.502 "flush": true, 00:13:27.502 "reset": true, 00:13:27.502 "compare": false, 00:13:27.502 "compare_and_write": false, 00:13:27.502 "abort": true, 00:13:27.502 "nvme_admin": false, 00:13:27.502 "nvme_io": false 00:13:27.502 }, 00:13:27.502 "memory_domains": [ 00:13:27.502 { 00:13:27.502 "dma_device_id": "system", 00:13:27.502 "dma_device_type": 1 00:13:27.502 }, 00:13:27.502 { 00:13:27.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.502 "dma_device_type": 2 00:13:27.502 } 00:13:27.502 ], 00:13:27.502 "driver_specific": {} 00:13:27.502 }' 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.502 10:17:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:27.502 10:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:27.760 "name": "BaseBdev3", 00:13:27.760 "aliases": [ 00:13:27.760 "9eceba43-2712-11ef-b084-113036b5c18d" 00:13:27.760 ], 00:13:27.760 "product_name": "Malloc disk", 00:13:27.760 "block_size": 512, 00:13:27.760 "num_blocks": 65536, 00:13:27.760 "uuid": "9eceba43-2712-11ef-b084-113036b5c18d", 00:13:27.760 "assigned_rate_limits": { 00:13:27.760 "rw_ios_per_sec": 0, 00:13:27.760 "rw_mbytes_per_sec": 0, 00:13:27.760 "r_mbytes_per_sec": 0, 00:13:27.760 "w_mbytes_per_sec": 0 00:13:27.760 }, 00:13:27.760 "claimed": true, 00:13:27.760 "claim_type": "exclusive_write", 00:13:27.760 "zoned": false, 00:13:27.760 "supported_io_types": { 00:13:27.760 "read": true, 00:13:27.760 "write": true, 00:13:27.760 "unmap": true, 00:13:27.760 "write_zeroes": true, 00:13:27.760 "flush": true, 00:13:27.760 "reset": true, 00:13:27.760 "compare": false, 00:13:27.760 "compare_and_write": false, 00:13:27.760 "abort": true, 00:13:27.760 "nvme_admin": false, 00:13:27.760 "nvme_io": false 00:13:27.760 }, 00:13:27.760 "memory_domains": [ 00:13:27.760 { 00:13:27.760 "dma_device_id": "system", 00:13:27.760 "dma_device_type": 1 00:13:27.760 }, 00:13:27.760 { 00:13:27.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.760 "dma_device_type": 2 00:13:27.760 } 00:13:27.760 ], 00:13:27.760 "driver_specific": {} 00:13:27.760 }' 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:27.760 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:28.018 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:28.018 "name": "BaseBdev4", 00:13:28.018 "aliases": [ 00:13:28.018 "9f4488a0-2712-11ef-b084-113036b5c18d" 00:13:28.018 ], 00:13:28.018 "product_name": "Malloc disk", 00:13:28.018 "block_size": 512, 00:13:28.018 "num_blocks": 65536, 00:13:28.018 "uuid": "9f4488a0-2712-11ef-b084-113036b5c18d", 00:13:28.018 "assigned_rate_limits": { 00:13:28.018 "rw_ios_per_sec": 0, 00:13:28.018 "rw_mbytes_per_sec": 0, 00:13:28.018 "r_mbytes_per_sec": 0, 00:13:28.018 "w_mbytes_per_sec": 0 00:13:28.018 }, 00:13:28.018 "claimed": true, 00:13:28.018 "claim_type": "exclusive_write", 00:13:28.018 "zoned": false, 00:13:28.018 "supported_io_types": { 00:13:28.018 "read": true, 00:13:28.018 "write": true, 00:13:28.018 "unmap": true, 00:13:28.018 "write_zeroes": true, 00:13:28.018 "flush": true, 00:13:28.018 "reset": true, 00:13:28.018 "compare": false, 00:13:28.018 "compare_and_write": false, 00:13:28.018 "abort": true, 00:13:28.018 "nvme_admin": false, 00:13:28.018 "nvme_io": false 00:13:28.018 }, 00:13:28.018 "memory_domains": [ 00:13:28.018 { 00:13:28.018 "dma_device_id": "system", 00:13:28.018 "dma_device_type": 1 00:13:28.018 }, 00:13:28.018 { 00:13:28.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.018 "dma_device_type": 2 00:13:28.018 } 00:13:28.018 ], 00:13:28.018 "driver_specific": {} 00:13:28.018 }' 00:13:28.018 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.018 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:28.276 10:17:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:28.534 [2024-06-10 10:17:34.011487] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:28.534 [2024-06-10 10:17:34.011518] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.534 [2024-06-10 10:17:34.011543] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.534 [2024-06-10 10:17:34.011559] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:13:28.534 [2024-06-10 10:17:34.011564] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c9f9f00 name Existed_Raid, state offline 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 59980 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 59980 ']' 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 59980 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps -c -o command 59980 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # tail -1 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:13:28.534 killing process with pid 59980 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 59980' 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 59980 00:13:28.534 [2024-06-10 10:17:34.048108] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.534 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 59980 00:13:28.534 [2024-06-10 10:17:34.068462] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.791 10:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:28.791 00:13:28.791 real 0m29.372s 00:13:28.791 user 0m54.146s 00:13:28.791 sys 0m3.750s 00:13:28.791 ************************************ 00:13:28.791 END TEST raid_state_function_test_sb 00:13:28.791 ************************************ 00:13:28.791 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:28.792 10:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.792 10:17:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:28.792 10:17:34 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:13:28.792 10:17:34 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:28.792 10:17:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.792 ************************************ 00:13:28.792 START TEST raid_superblock_test 00:13:28.792 ************************************ 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid0 4 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:28.792 10:17:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=60806 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 60806 /var/tmp/spdk-raid.sock 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 60806 ']' 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:28.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:28.792 10:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.792 [2024-06-10 10:17:34.298539] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:13:28.792 [2024-06-10 10:17:34.298767] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:29.356 EAL: TSC is not safe to use in SMP mode 00:13:29.356 EAL: TSC is not invariant 00:13:29.356 [2024-06-10 10:17:34.782864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.356 [2024-06-10 10:17:34.867426] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
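[editor's note] At this point raid_state_function_test_sb has finished and raid_superblock_test is standing up a fresh bdev_svc target on /var/tmp/spdk-raid.sock. For orientation, the RPC sequence the following trace performs can be condensed into the sketch below. It is an illustrative recap assembled only from commands that appear verbatim in this log (binary path, socket, bdev names, UUID and strip size are copied from the trace); the $rpc shell variable is a local shorthand introduced here for readability, not part of the test script.

    # Launch the bare bdev_svc app with raid debug logging, as the harness does
    # (the test waits for the RPC socket via waitforlisten before issuing commands)
    /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    rpc='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # Each base device is a 32 MB malloc bdev with 512-byte blocks, wrapped in a passthru bdev
    $rpc bdev_malloc_create 32 512 -b malloc1
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # ...repeated for malloc2/pt2, malloc3/pt3 and malloc4/pt4...
    # Assemble a raid0 volume with a 64 KiB strip size and an on-disk superblock (-s)
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    # Confirm the array is online with all four base bdevs discovered
    $rpc bdev_raid_get_bdevs all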
00:13:29.356 [2024-06-10 10:17:34.869617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.356 [2024-06-10 10:17:34.870316] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.356 [2024-06-10 10:17:34.870327] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:29.921 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:30.181 malloc1 00:13:30.181 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:30.181 [2024-06-10 10:17:35.781258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:30.181 [2024-06-10 10:17:35.781313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.181 [2024-06-10 10:17:35.781323] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb4780 00:13:30.181 [2024-06-10 10:17:35.781331] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.181 [2024-06-10 10:17:35.782063] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.181 [2024-06-10 10:17:35.782094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:30.439 pt1 00:13:30.439 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:30.439 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:30.439 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:30.440 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:30.440 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:30.440 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:30.440 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:30.440 10:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:30.440 10:17:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:30.696 malloc2 00:13:30.696 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:30.953 [2024-06-10 10:17:36.329284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:30.953 [2024-06-10 10:17:36.329340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.953 [2024-06-10 10:17:36.329351] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb4c80 00:13:30.953 [2024-06-10 10:17:36.329358] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.953 [2024-06-10 10:17:36.329998] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.953 [2024-06-10 10:17:36.330058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:30.953 pt2 00:13:30.953 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:30.953 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:30.953 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:30.953 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:30.953 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:30.953 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:30.953 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:30.953 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:30.953 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:31.210 malloc3 00:13:31.210 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:31.468 [2024-06-10 10:17:36.881298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:31.468 [2024-06-10 10:17:36.881356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.468 [2024-06-10 10:17:36.881367] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb5180 00:13:31.468 [2024-06-10 10:17:36.881385] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.468 [2024-06-10 10:17:36.881892] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.468 [2024-06-10 10:17:36.881917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:31.468 pt3 00:13:31.468 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:31.468 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:31.468 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc4 00:13:31.468 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:13:31.468 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:31.468 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:31.468 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:31.468 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:31.468 10:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:13:31.725 malloc4 00:13:31.725 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:31.982 [2024-06-10 10:17:37.337326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:31.982 [2024-06-10 10:17:37.337398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.982 [2024-06-10 10:17:37.337411] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb5680 00:13:31.982 [2024-06-10 10:17:37.337419] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.982 [2024-06-10 10:17:37.337952] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.982 [2024-06-10 10:17:37.337986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:31.982 pt4 00:13:31.982 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:31.982 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:31.982 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:13:31.982 [2024-06-10 10:17:37.581346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:31.982 [2024-06-10 10:17:37.581857] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:31.982 [2024-06-10 10:17:37.581880] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:31.982 [2024-06-10 10:17:37.581891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:31.982 [2024-06-10 10:17:37.581938] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abb5900 00:13:31.982 [2024-06-10 10:17:37.581944] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:31.982 [2024-06-10 10:17:37.581976] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac17e20 00:13:31.982 [2024-06-10 10:17:37.582036] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abb5900 00:13:31.982 [2024-06-10 10:17:37.582040] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82abb5900 00:13:31.982 [2024-06-10 10:17:37.582064] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.239 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:32.239 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:32.239 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:32.240 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:32.240 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:32.240 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:32.240 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:32.240 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:32.240 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:32.240 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:32.240 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.240 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.498 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:32.498 "name": "raid_bdev1", 00:13:32.498 "uuid": "a9586acf-2712-11ef-b084-113036b5c18d", 00:13:32.498 "strip_size_kb": 64, 00:13:32.498 "state": "online", 00:13:32.498 "raid_level": "raid0", 00:13:32.498 "superblock": true, 00:13:32.498 "num_base_bdevs": 4, 00:13:32.498 "num_base_bdevs_discovered": 4, 00:13:32.498 "num_base_bdevs_operational": 4, 00:13:32.498 "base_bdevs_list": [ 00:13:32.498 { 00:13:32.498 "name": "pt1", 00:13:32.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:32.498 "is_configured": true, 00:13:32.498 "data_offset": 2048, 00:13:32.498 "data_size": 63488 00:13:32.498 }, 00:13:32.498 { 00:13:32.498 "name": "pt2", 00:13:32.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:32.498 "is_configured": true, 00:13:32.498 "data_offset": 2048, 00:13:32.498 "data_size": 63488 00:13:32.498 }, 00:13:32.498 { 00:13:32.498 "name": "pt3", 00:13:32.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:32.498 "is_configured": true, 00:13:32.498 "data_offset": 2048, 00:13:32.498 "data_size": 63488 00:13:32.498 }, 00:13:32.498 { 00:13:32.498 "name": "pt4", 00:13:32.498 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:32.498 "is_configured": true, 00:13:32.498 "data_offset": 2048, 00:13:32.498 "data_size": 63488 00:13:32.498 } 00:13:32.498 ] 00:13:32.498 }' 00:13:32.498 10:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:32.498 10:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.756 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:32.756 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:32.756 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:32.756 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:32.756 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:32.756 10:17:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:13:32.756 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:32.756 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:33.015 [2024-06-10 10:17:38.481411] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.015 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:33.015 "name": "raid_bdev1", 00:13:33.015 "aliases": [ 00:13:33.015 "a9586acf-2712-11ef-b084-113036b5c18d" 00:13:33.015 ], 00:13:33.015 "product_name": "Raid Volume", 00:13:33.015 "block_size": 512, 00:13:33.015 "num_blocks": 253952, 00:13:33.015 "uuid": "a9586acf-2712-11ef-b084-113036b5c18d", 00:13:33.015 "assigned_rate_limits": { 00:13:33.015 "rw_ios_per_sec": 0, 00:13:33.015 "rw_mbytes_per_sec": 0, 00:13:33.015 "r_mbytes_per_sec": 0, 00:13:33.015 "w_mbytes_per_sec": 0 00:13:33.015 }, 00:13:33.015 "claimed": false, 00:13:33.015 "zoned": false, 00:13:33.015 "supported_io_types": { 00:13:33.015 "read": true, 00:13:33.015 "write": true, 00:13:33.015 "unmap": true, 00:13:33.015 "write_zeroes": true, 00:13:33.015 "flush": true, 00:13:33.015 "reset": true, 00:13:33.015 "compare": false, 00:13:33.015 "compare_and_write": false, 00:13:33.015 "abort": false, 00:13:33.015 "nvme_admin": false, 00:13:33.015 "nvme_io": false 00:13:33.015 }, 00:13:33.015 "memory_domains": [ 00:13:33.015 { 00:13:33.015 "dma_device_id": "system", 00:13:33.015 "dma_device_type": 1 00:13:33.015 }, 00:13:33.015 { 00:13:33.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.015 "dma_device_type": 2 00:13:33.015 }, 00:13:33.015 { 00:13:33.015 "dma_device_id": "system", 00:13:33.015 "dma_device_type": 1 00:13:33.015 }, 00:13:33.015 { 00:13:33.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.015 "dma_device_type": 2 00:13:33.015 }, 00:13:33.015 { 00:13:33.015 "dma_device_id": "system", 00:13:33.015 "dma_device_type": 1 00:13:33.015 }, 00:13:33.015 { 00:13:33.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.015 "dma_device_type": 2 00:13:33.015 }, 00:13:33.015 { 00:13:33.015 "dma_device_id": "system", 00:13:33.015 "dma_device_type": 1 00:13:33.015 }, 00:13:33.015 { 00:13:33.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.015 "dma_device_type": 2 00:13:33.015 } 00:13:33.015 ], 00:13:33.015 "driver_specific": { 00:13:33.015 "raid": { 00:13:33.015 "uuid": "a9586acf-2712-11ef-b084-113036b5c18d", 00:13:33.015 "strip_size_kb": 64, 00:13:33.015 "state": "online", 00:13:33.015 "raid_level": "raid0", 00:13:33.015 "superblock": true, 00:13:33.015 "num_base_bdevs": 4, 00:13:33.015 "num_base_bdevs_discovered": 4, 00:13:33.015 "num_base_bdevs_operational": 4, 00:13:33.015 "base_bdevs_list": [ 00:13:33.015 { 00:13:33.015 "name": "pt1", 00:13:33.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.015 "is_configured": true, 00:13:33.015 "data_offset": 2048, 00:13:33.015 "data_size": 63488 00:13:33.015 }, 00:13:33.015 { 00:13:33.015 "name": "pt2", 00:13:33.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.015 "is_configured": true, 00:13:33.015 "data_offset": 2048, 00:13:33.015 "data_size": 63488 00:13:33.015 }, 00:13:33.015 { 00:13:33.015 "name": "pt3", 00:13:33.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.015 "is_configured": true, 00:13:33.016 "data_offset": 2048, 00:13:33.016 "data_size": 63488 00:13:33.016 }, 00:13:33.016 { 00:13:33.016 "name": 
"pt4", 00:13:33.016 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:33.016 "is_configured": true, 00:13:33.016 "data_offset": 2048, 00:13:33.016 "data_size": 63488 00:13:33.016 } 00:13:33.016 ] 00:13:33.016 } 00:13:33.016 } 00:13:33.016 }' 00:13:33.016 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:33.016 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:33.016 pt2 00:13:33.016 pt3 00:13:33.016 pt4' 00:13:33.016 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:33.016 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:33.016 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:33.275 "name": "pt1", 00:13:33.275 "aliases": [ 00:13:33.275 "00000000-0000-0000-0000-000000000001" 00:13:33.275 ], 00:13:33.275 "product_name": "passthru", 00:13:33.275 "block_size": 512, 00:13:33.275 "num_blocks": 65536, 00:13:33.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.275 "assigned_rate_limits": { 00:13:33.275 "rw_ios_per_sec": 0, 00:13:33.275 "rw_mbytes_per_sec": 0, 00:13:33.275 "r_mbytes_per_sec": 0, 00:13:33.275 "w_mbytes_per_sec": 0 00:13:33.275 }, 00:13:33.275 "claimed": true, 00:13:33.275 "claim_type": "exclusive_write", 00:13:33.275 "zoned": false, 00:13:33.275 "supported_io_types": { 00:13:33.275 "read": true, 00:13:33.275 "write": true, 00:13:33.275 "unmap": true, 00:13:33.275 "write_zeroes": true, 00:13:33.275 "flush": true, 00:13:33.275 "reset": true, 00:13:33.275 "compare": false, 00:13:33.275 "compare_and_write": false, 00:13:33.275 "abort": true, 00:13:33.275 "nvme_admin": false, 00:13:33.275 "nvme_io": false 00:13:33.275 }, 00:13:33.275 "memory_domains": [ 00:13:33.275 { 00:13:33.275 "dma_device_id": "system", 00:13:33.275 "dma_device_type": 1 00:13:33.275 }, 00:13:33.275 { 00:13:33.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.275 "dma_device_type": 2 00:13:33.275 } 00:13:33.275 ], 00:13:33.275 "driver_specific": { 00:13:33.275 "passthru": { 00:13:33.275 "name": "pt1", 00:13:33.275 "base_bdev_name": "malloc1" 00:13:33.275 } 00:13:33.275 } 00:13:33.275 }' 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:33.275 10:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:33.841 "name": "pt2", 00:13:33.841 "aliases": [ 00:13:33.841 "00000000-0000-0000-0000-000000000002" 00:13:33.841 ], 00:13:33.841 "product_name": "passthru", 00:13:33.841 "block_size": 512, 00:13:33.841 "num_blocks": 65536, 00:13:33.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.841 "assigned_rate_limits": { 00:13:33.841 "rw_ios_per_sec": 0, 00:13:33.841 "rw_mbytes_per_sec": 0, 00:13:33.841 "r_mbytes_per_sec": 0, 00:13:33.841 "w_mbytes_per_sec": 0 00:13:33.841 }, 00:13:33.841 "claimed": true, 00:13:33.841 "claim_type": "exclusive_write", 00:13:33.841 "zoned": false, 00:13:33.841 "supported_io_types": { 00:13:33.841 "read": true, 00:13:33.841 "write": true, 00:13:33.841 "unmap": true, 00:13:33.841 "write_zeroes": true, 00:13:33.841 "flush": true, 00:13:33.841 "reset": true, 00:13:33.841 "compare": false, 00:13:33.841 "compare_and_write": false, 00:13:33.841 "abort": true, 00:13:33.841 "nvme_admin": false, 00:13:33.841 "nvme_io": false 00:13:33.841 }, 00:13:33.841 "memory_domains": [ 00:13:33.841 { 00:13:33.841 "dma_device_id": "system", 00:13:33.841 "dma_device_type": 1 00:13:33.841 }, 00:13:33.841 { 00:13:33.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.841 "dma_device_type": 2 00:13:33.841 } 00:13:33.841 ], 00:13:33.841 "driver_specific": { 00:13:33.841 "passthru": { 00:13:33.841 "name": "pt2", 00:13:33.841 "base_bdev_name": "malloc2" 00:13:33.841 } 00:13:33.841 } 00:13:33.841 }' 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:33.841 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:33.841 10:17:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.099 "name": "pt3", 00:13:34.099 "aliases": [ 00:13:34.099 "00000000-0000-0000-0000-000000000003" 00:13:34.099 ], 00:13:34.099 "product_name": "passthru", 00:13:34.099 "block_size": 512, 00:13:34.099 "num_blocks": 65536, 00:13:34.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.099 "assigned_rate_limits": { 00:13:34.099 "rw_ios_per_sec": 0, 00:13:34.099 "rw_mbytes_per_sec": 0, 00:13:34.099 "r_mbytes_per_sec": 0, 00:13:34.099 "w_mbytes_per_sec": 0 00:13:34.099 }, 00:13:34.099 "claimed": true, 00:13:34.099 "claim_type": "exclusive_write", 00:13:34.099 "zoned": false, 00:13:34.099 "supported_io_types": { 00:13:34.099 "read": true, 00:13:34.099 "write": true, 00:13:34.099 "unmap": true, 00:13:34.099 "write_zeroes": true, 00:13:34.099 "flush": true, 00:13:34.099 "reset": true, 00:13:34.099 "compare": false, 00:13:34.099 "compare_and_write": false, 00:13:34.099 "abort": true, 00:13:34.099 "nvme_admin": false, 00:13:34.099 "nvme_io": false 00:13:34.099 }, 00:13:34.099 "memory_domains": [ 00:13:34.099 { 00:13:34.099 "dma_device_id": "system", 00:13:34.099 "dma_device_type": 1 00:13:34.099 }, 00:13:34.099 { 00:13:34.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.099 "dma_device_type": 2 00:13:34.099 } 00:13:34.099 ], 00:13:34.099 "driver_specific": { 00:13:34.099 "passthru": { 00:13:34.099 "name": "pt3", 00:13:34.099 "base_bdev_name": "malloc3" 00:13:34.099 } 00:13:34.099 } 00:13:34.099 }' 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:34.099 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:34.359 "name": "pt4", 00:13:34.359 "aliases": [ 00:13:34.359 "00000000-0000-0000-0000-000000000004" 00:13:34.359 ], 00:13:34.359 "product_name": "passthru", 00:13:34.359 "block_size": 512, 00:13:34.359 "num_blocks": 65536, 00:13:34.359 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:34.359 "assigned_rate_limits": { 00:13:34.359 "rw_ios_per_sec": 0, 00:13:34.359 "rw_mbytes_per_sec": 0, 00:13:34.359 "r_mbytes_per_sec": 0, 00:13:34.359 "w_mbytes_per_sec": 0 00:13:34.359 }, 00:13:34.359 "claimed": true, 00:13:34.359 "claim_type": "exclusive_write", 00:13:34.359 "zoned": false, 00:13:34.359 "supported_io_types": { 00:13:34.359 "read": true, 00:13:34.359 "write": true, 00:13:34.359 "unmap": true, 00:13:34.359 "write_zeroes": true, 00:13:34.359 "flush": true, 00:13:34.359 "reset": true, 00:13:34.359 "compare": false, 00:13:34.359 "compare_and_write": false, 00:13:34.359 "abort": true, 00:13:34.359 "nvme_admin": false, 00:13:34.359 "nvme_io": false 00:13:34.359 }, 00:13:34.359 "memory_domains": [ 00:13:34.359 { 00:13:34.359 "dma_device_id": "system", 00:13:34.359 "dma_device_type": 1 00:13:34.359 }, 00:13:34.359 { 00:13:34.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.359 "dma_device_type": 2 00:13:34.359 } 00:13:34.359 ], 00:13:34.359 "driver_specific": { 00:13:34.359 "passthru": { 00:13:34.359 "name": "pt4", 00:13:34.359 "base_bdev_name": "malloc4" 00:13:34.359 } 00:13:34.359 } 00:13:34.359 }' 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:34.359 10:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:34.670 [2024-06-10 10:17:40.105436] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.670 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a9586acf-2712-11ef-b084-113036b5c18d 00:13:34.670 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a9586acf-2712-11ef-b084-113036b5c18d ']' 00:13:34.670 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:34.957 [2024-06-10 10:17:40.325401] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.957 [2024-06-10 10:17:40.325416] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.957 [2024-06-10 10:17:40.325444] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.957 [2024-06-10 10:17:40.325458] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.957 [2024-06-10 10:17:40.325462] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abb5900 name raid_bdev1, state offline 00:13:34.957 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.957 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:35.215 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:35.215 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:35.215 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.215 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:35.215 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.215 10:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:35.474 10:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.474 10:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:35.732 10:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:35.732 10:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:13:35.989 10:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:35.989 10:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:36.248 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:36.506 [2024-06-10 10:17:41.945495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:36.506 [2024-06-10 10:17:41.945955] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:36.506 [2024-06-10 10:17:41.945969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:36.506 [2024-06-10 10:17:41.945976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:36.506 [2024-06-10 10:17:41.946004] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:36.506 [2024-06-10 10:17:41.946038] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:36.506 [2024-06-10 10:17:41.946047] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:36.506 [2024-06-10 10:17:41.946056] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:36.506 [2024-06-10 10:17:41.946064] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.506 [2024-06-10 10:17:41.946068] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abb5680 name raid_bdev1, state configuring 00:13:36.506 request: 00:13:36.506 { 00:13:36.506 "name": "raid_bdev1", 00:13:36.506 "raid_level": "raid0", 00:13:36.506 "base_bdevs": [ 00:13:36.506 "malloc1", 00:13:36.506 "malloc2", 00:13:36.506 "malloc3", 00:13:36.506 "malloc4" 00:13:36.506 ], 00:13:36.506 "superblock": false, 00:13:36.506 "strip_size_kb": 64, 00:13:36.506 "method": "bdev_raid_create", 00:13:36.506 "req_id": 1 00:13:36.506 } 00:13:36.506 Got JSON-RPC error response 00:13:36.506 response: 00:13:36.506 { 00:13:36.506 "code": -17, 00:13:36.506 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:36.506 } 00:13:36.506 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:13:36.506 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:36.506 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:36.506 10:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:36.506 10:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.506 10:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 
00:13:36.764 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:36.764 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:36.764 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:37.022 [2024-06-10 10:17:42.525502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:37.022 [2024-06-10 10:17:42.525546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.022 [2024-06-10 10:17:42.525556] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb5180 00:13:37.022 [2024-06-10 10:17:42.525564] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.022 [2024-06-10 10:17:42.526043] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.022 [2024-06-10 10:17:42.526071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:37.022 [2024-06-10 10:17:42.526091] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:37.022 [2024-06-10 10:17:42.526101] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:37.022 pt1 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.022 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.280 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:37.280 "name": "raid_bdev1", 00:13:37.280 "uuid": "a9586acf-2712-11ef-b084-113036b5c18d", 00:13:37.280 "strip_size_kb": 64, 00:13:37.280 "state": "configuring", 00:13:37.280 "raid_level": "raid0", 00:13:37.280 "superblock": true, 00:13:37.280 "num_base_bdevs": 4, 00:13:37.280 "num_base_bdevs_discovered": 1, 00:13:37.280 "num_base_bdevs_operational": 4, 00:13:37.280 "base_bdevs_list": [ 00:13:37.280 { 00:13:37.280 "name": "pt1", 00:13:37.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.280 "is_configured": true, 00:13:37.280 "data_offset": 2048, 00:13:37.280 "data_size": 63488 00:13:37.280 }, 
00:13:37.280 { 00:13:37.280 "name": null, 00:13:37.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.280 "is_configured": false, 00:13:37.280 "data_offset": 2048, 00:13:37.280 "data_size": 63488 00:13:37.280 }, 00:13:37.280 { 00:13:37.280 "name": null, 00:13:37.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.280 "is_configured": false, 00:13:37.280 "data_offset": 2048, 00:13:37.280 "data_size": 63488 00:13:37.280 }, 00:13:37.280 { 00:13:37.280 "name": null, 00:13:37.280 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:37.280 "is_configured": false, 00:13:37.280 "data_offset": 2048, 00:13:37.280 "data_size": 63488 00:13:37.280 } 00:13:37.280 ] 00:13:37.280 }' 00:13:37.280 10:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:37.280 10:17:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.540 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:13:37.540 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.113 [2024-06-10 10:17:43.445589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.113 [2024-06-10 10:17:43.445646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.113 [2024-06-10 10:17:43.445657] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb4780 00:13:38.113 [2024-06-10 10:17:43.445665] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.113 [2024-06-10 10:17:43.445763] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.113 [2024-06-10 10:17:43.445772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.113 [2024-06-10 10:17:43.445791] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:38.113 [2024-06-10 10:17:43.445798] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.113 pt2 00:13:38.113 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:38.113 [2024-06-10 10:17:43.701615] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:38.393 10:17:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:38.393 "name": "raid_bdev1", 00:13:38.393 "uuid": "a9586acf-2712-11ef-b084-113036b5c18d", 00:13:38.393 "strip_size_kb": 64, 00:13:38.393 "state": "configuring", 00:13:38.393 "raid_level": "raid0", 00:13:38.393 "superblock": true, 00:13:38.393 "num_base_bdevs": 4, 00:13:38.393 "num_base_bdevs_discovered": 1, 00:13:38.393 "num_base_bdevs_operational": 4, 00:13:38.393 "base_bdevs_list": [ 00:13:38.393 { 00:13:38.393 "name": "pt1", 00:13:38.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.393 "is_configured": true, 00:13:38.393 "data_offset": 2048, 00:13:38.393 "data_size": 63488 00:13:38.393 }, 00:13:38.393 { 00:13:38.393 "name": null, 00:13:38.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.393 "is_configured": false, 00:13:38.393 "data_offset": 2048, 00:13:38.393 "data_size": 63488 00:13:38.393 }, 00:13:38.393 { 00:13:38.393 "name": null, 00:13:38.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.393 "is_configured": false, 00:13:38.393 "data_offset": 2048, 00:13:38.393 "data_size": 63488 00:13:38.393 }, 00:13:38.393 { 00:13:38.393 "name": null, 00:13:38.393 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:38.393 "is_configured": false, 00:13:38.393 "data_offset": 2048, 00:13:38.393 "data_size": 63488 00:13:38.393 } 00:13:38.393 ] 00:13:38.393 }' 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:38.393 10:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.961 10:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:38.961 10:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:38.961 10:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.961 [2024-06-10 10:17:44.509640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.961 [2024-06-10 10:17:44.509700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.961 [2024-06-10 10:17:44.509712] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb4780 00:13:38.961 [2024-06-10 10:17:44.509720] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.961 [2024-06-10 10:17:44.509819] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.961 [2024-06-10 10:17:44.509828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.961 [2024-06-10 10:17:44.509849] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:38.961 [2024-06-10 10:17:44.509863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.961 pt2 00:13:38.961 10:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:38.961 10:17:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:38.961 10:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:39.220 [2024-06-10 10:17:44.737640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:39.220 [2024-06-10 10:17:44.737696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.220 [2024-06-10 10:17:44.737706] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb5b80 00:13:39.220 [2024-06-10 10:17:44.737714] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.220 [2024-06-10 10:17:44.737807] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.220 [2024-06-10 10:17:44.737815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:39.220 [2024-06-10 10:17:44.737835] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:39.220 [2024-06-10 10:17:44.737842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:39.220 pt3 00:13:39.220 10:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:39.220 10:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:39.220 10:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:39.478 [2024-06-10 10:17:44.985666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:39.478 [2024-06-10 10:17:44.985730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.478 [2024-06-10 10:17:44.985743] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abb5900 00:13:39.478 [2024-06-10 10:17:44.985751] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.478 [2024-06-10 10:17:44.985850] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.478 [2024-06-10 10:17:44.985859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:39.478 [2024-06-10 10:17:44.985882] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:39.478 [2024-06-10 10:17:44.985889] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:39.478 [2024-06-10 10:17:44.985916] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abb4c80 00:13:39.478 [2024-06-10 10:17:44.985920] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:39.478 [2024-06-10 10:17:44.985940] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac17e20 00:13:39.478 [2024-06-10 10:17:44.985986] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abb4c80 00:13:39.478 [2024-06-10 10:17:44.985990] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82abb4c80 00:13:39.478 [2024-06-10 10:17:44.986007] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.478 pt4 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.478 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.736 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.736 "name": "raid_bdev1", 00:13:39.736 "uuid": "a9586acf-2712-11ef-b084-113036b5c18d", 00:13:39.736 "strip_size_kb": 64, 00:13:39.736 "state": "online", 00:13:39.736 "raid_level": "raid0", 00:13:39.736 "superblock": true, 00:13:39.736 "num_base_bdevs": 4, 00:13:39.736 "num_base_bdevs_discovered": 4, 00:13:39.736 "num_base_bdevs_operational": 4, 00:13:39.736 "base_bdevs_list": [ 00:13:39.736 { 00:13:39.736 "name": "pt1", 00:13:39.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.736 "is_configured": true, 00:13:39.736 "data_offset": 2048, 00:13:39.736 "data_size": 63488 00:13:39.736 }, 00:13:39.736 { 00:13:39.736 "name": "pt2", 00:13:39.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.736 "is_configured": true, 00:13:39.736 "data_offset": 2048, 00:13:39.736 "data_size": 63488 00:13:39.736 }, 00:13:39.736 { 00:13:39.736 "name": "pt3", 00:13:39.736 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.736 "is_configured": true, 00:13:39.736 "data_offset": 2048, 00:13:39.736 "data_size": 63488 00:13:39.736 }, 00:13:39.736 { 00:13:39.736 "name": "pt4", 00:13:39.736 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:39.736 "is_configured": true, 00:13:39.736 "data_offset": 2048, 00:13:39.736 "data_size": 63488 00:13:39.736 } 00:13:39.736 ] 00:13:39.736 }' 00:13:39.736 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.736 10:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.994 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:39.994 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:39.994 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:39.994 10:17:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:39.994 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:39.994 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:39.994 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:39.994 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:40.597 [2024-06-10 10:17:45.861782] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:40.597 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:40.597 "name": "raid_bdev1", 00:13:40.597 "aliases": [ 00:13:40.597 "a9586acf-2712-11ef-b084-113036b5c18d" 00:13:40.597 ], 00:13:40.597 "product_name": "Raid Volume", 00:13:40.597 "block_size": 512, 00:13:40.597 "num_blocks": 253952, 00:13:40.597 "uuid": "a9586acf-2712-11ef-b084-113036b5c18d", 00:13:40.597 "assigned_rate_limits": { 00:13:40.597 "rw_ios_per_sec": 0, 00:13:40.597 "rw_mbytes_per_sec": 0, 00:13:40.597 "r_mbytes_per_sec": 0, 00:13:40.597 "w_mbytes_per_sec": 0 00:13:40.597 }, 00:13:40.597 "claimed": false, 00:13:40.597 "zoned": false, 00:13:40.597 "supported_io_types": { 00:13:40.597 "read": true, 00:13:40.597 "write": true, 00:13:40.597 "unmap": true, 00:13:40.597 "write_zeroes": true, 00:13:40.597 "flush": true, 00:13:40.597 "reset": true, 00:13:40.597 "compare": false, 00:13:40.597 "compare_and_write": false, 00:13:40.597 "abort": false, 00:13:40.597 "nvme_admin": false, 00:13:40.597 "nvme_io": false 00:13:40.597 }, 00:13:40.597 "memory_domains": [ 00:13:40.597 { 00:13:40.597 "dma_device_id": "system", 00:13:40.597 "dma_device_type": 1 00:13:40.597 }, 00:13:40.597 { 00:13:40.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.597 "dma_device_type": 2 00:13:40.597 }, 00:13:40.597 { 00:13:40.597 "dma_device_id": "system", 00:13:40.597 "dma_device_type": 1 00:13:40.597 }, 00:13:40.597 { 00:13:40.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.597 "dma_device_type": 2 00:13:40.597 }, 00:13:40.597 { 00:13:40.597 "dma_device_id": "system", 00:13:40.597 "dma_device_type": 1 00:13:40.597 }, 00:13:40.597 { 00:13:40.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.597 "dma_device_type": 2 00:13:40.597 }, 00:13:40.597 { 00:13:40.597 "dma_device_id": "system", 00:13:40.597 "dma_device_type": 1 00:13:40.597 }, 00:13:40.597 { 00:13:40.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.597 "dma_device_type": 2 00:13:40.597 } 00:13:40.597 ], 00:13:40.597 "driver_specific": { 00:13:40.597 "raid": { 00:13:40.597 "uuid": "a9586acf-2712-11ef-b084-113036b5c18d", 00:13:40.597 "strip_size_kb": 64, 00:13:40.597 "state": "online", 00:13:40.597 "raid_level": "raid0", 00:13:40.597 "superblock": true, 00:13:40.597 "num_base_bdevs": 4, 00:13:40.597 "num_base_bdevs_discovered": 4, 00:13:40.597 "num_base_bdevs_operational": 4, 00:13:40.597 "base_bdevs_list": [ 00:13:40.597 { 00:13:40.597 "name": "pt1", 00:13:40.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.597 "is_configured": true, 00:13:40.597 "data_offset": 2048, 00:13:40.597 "data_size": 63488 00:13:40.597 }, 00:13:40.597 { 00:13:40.597 "name": "pt2", 00:13:40.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.597 "is_configured": true, 00:13:40.597 "data_offset": 2048, 00:13:40.597 "data_size": 63488 00:13:40.597 }, 00:13:40.597 { 
00:13:40.597 "name": "pt3", 00:13:40.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.597 "is_configured": true, 00:13:40.597 "data_offset": 2048, 00:13:40.597 "data_size": 63488 00:13:40.597 }, 00:13:40.597 { 00:13:40.597 "name": "pt4", 00:13:40.597 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:40.597 "is_configured": true, 00:13:40.597 "data_offset": 2048, 00:13:40.597 "data_size": 63488 00:13:40.597 } 00:13:40.597 ] 00:13:40.597 } 00:13:40.597 } 00:13:40.597 }' 00:13:40.597 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:40.597 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:40.597 pt2 00:13:40.597 pt3 00:13:40.597 pt4' 00:13:40.597 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:40.597 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.597 10:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:40.597 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.597 "name": "pt1", 00:13:40.597 "aliases": [ 00:13:40.597 "00000000-0000-0000-0000-000000000001" 00:13:40.597 ], 00:13:40.597 "product_name": "passthru", 00:13:40.597 "block_size": 512, 00:13:40.597 "num_blocks": 65536, 00:13:40.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.597 "assigned_rate_limits": { 00:13:40.597 "rw_ios_per_sec": 0, 00:13:40.597 "rw_mbytes_per_sec": 0, 00:13:40.597 "r_mbytes_per_sec": 0, 00:13:40.597 "w_mbytes_per_sec": 0 00:13:40.597 }, 00:13:40.597 "claimed": true, 00:13:40.597 "claim_type": "exclusive_write", 00:13:40.597 "zoned": false, 00:13:40.597 "supported_io_types": { 00:13:40.597 "read": true, 00:13:40.597 "write": true, 00:13:40.597 "unmap": true, 00:13:40.597 "write_zeroes": true, 00:13:40.597 "flush": true, 00:13:40.597 "reset": true, 00:13:40.597 "compare": false, 00:13:40.597 "compare_and_write": false, 00:13:40.597 "abort": true, 00:13:40.597 "nvme_admin": false, 00:13:40.597 "nvme_io": false 00:13:40.597 }, 00:13:40.597 "memory_domains": [ 00:13:40.597 { 00:13:40.597 "dma_device_id": "system", 00:13:40.597 "dma_device_type": 1 00:13:40.597 }, 00:13:40.597 { 00:13:40.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.597 "dma_device_type": 2 00:13:40.597 } 00:13:40.597 ], 00:13:40.597 "driver_specific": { 00:13:40.597 "passthru": { 00:13:40.597 "name": "pt1", 00:13:40.597 "base_bdev_name": "malloc1" 00:13:40.597 } 00:13:40.597 } 00:13:40.597 }' 00:13:40.597 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.597 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:40.875 10:17:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:40.875 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:40.875 "name": "pt2", 00:13:40.875 "aliases": [ 00:13:40.875 "00000000-0000-0000-0000-000000000002" 00:13:40.875 ], 00:13:40.875 "product_name": "passthru", 00:13:40.875 "block_size": 512, 00:13:40.875 "num_blocks": 65536, 00:13:40.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.875 "assigned_rate_limits": { 00:13:40.875 "rw_ios_per_sec": 0, 00:13:40.875 "rw_mbytes_per_sec": 0, 00:13:40.876 "r_mbytes_per_sec": 0, 00:13:40.876 "w_mbytes_per_sec": 0 00:13:40.876 }, 00:13:40.876 "claimed": true, 00:13:40.876 "claim_type": "exclusive_write", 00:13:40.876 "zoned": false, 00:13:40.876 "supported_io_types": { 00:13:40.876 "read": true, 00:13:40.876 "write": true, 00:13:40.876 "unmap": true, 00:13:40.876 "write_zeroes": true, 00:13:40.876 "flush": true, 00:13:40.876 "reset": true, 00:13:40.876 "compare": false, 00:13:40.876 "compare_and_write": false, 00:13:40.876 "abort": true, 00:13:40.876 "nvme_admin": false, 00:13:40.876 "nvme_io": false 00:13:40.876 }, 00:13:40.876 "memory_domains": [ 00:13:40.876 { 00:13:40.876 "dma_device_id": "system", 00:13:40.876 "dma_device_type": 1 00:13:40.876 }, 00:13:40.876 { 00:13:40.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.876 "dma_device_type": 2 00:13:40.876 } 00:13:40.876 ], 00:13:40.876 "driver_specific": { 00:13:40.876 "passthru": { 00:13:40.876 "name": "pt2", 00:13:40.876 "base_bdev_name": "malloc2" 00:13:40.876 } 00:13:40.876 } 00:13:40.876 }' 00:13:40.876 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.876 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:40.876 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:40.876 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:40.876 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 
-- # for name in $base_bdev_names 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:41.134 "name": "pt3", 00:13:41.134 "aliases": [ 00:13:41.134 "00000000-0000-0000-0000-000000000003" 00:13:41.134 ], 00:13:41.134 "product_name": "passthru", 00:13:41.134 "block_size": 512, 00:13:41.134 "num_blocks": 65536, 00:13:41.134 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.134 "assigned_rate_limits": { 00:13:41.134 "rw_ios_per_sec": 0, 00:13:41.134 "rw_mbytes_per_sec": 0, 00:13:41.134 "r_mbytes_per_sec": 0, 00:13:41.134 "w_mbytes_per_sec": 0 00:13:41.134 }, 00:13:41.134 "claimed": true, 00:13:41.134 "claim_type": "exclusive_write", 00:13:41.134 "zoned": false, 00:13:41.134 "supported_io_types": { 00:13:41.134 "read": true, 00:13:41.134 "write": true, 00:13:41.134 "unmap": true, 00:13:41.134 "write_zeroes": true, 00:13:41.134 "flush": true, 00:13:41.134 "reset": true, 00:13:41.134 "compare": false, 00:13:41.134 "compare_and_write": false, 00:13:41.134 "abort": true, 00:13:41.134 "nvme_admin": false, 00:13:41.134 "nvme_io": false 00:13:41.134 }, 00:13:41.134 "memory_domains": [ 00:13:41.134 { 00:13:41.134 "dma_device_id": "system", 00:13:41.134 "dma_device_type": 1 00:13:41.134 }, 00:13:41.134 { 00:13:41.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.134 "dma_device_type": 2 00:13:41.134 } 00:13:41.134 ], 00:13:41.134 "driver_specific": { 00:13:41.134 "passthru": { 00:13:41.134 "name": "pt3", 00:13:41.134 "base_bdev_name": "malloc3" 00:13:41.134 } 00:13:41.134 } 00:13:41.134 }' 00:13:41.134 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:41.393 10:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:41.651 "name": "pt4", 00:13:41.651 
"aliases": [ 00:13:41.651 "00000000-0000-0000-0000-000000000004" 00:13:41.651 ], 00:13:41.651 "product_name": "passthru", 00:13:41.651 "block_size": 512, 00:13:41.651 "num_blocks": 65536, 00:13:41.651 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:41.651 "assigned_rate_limits": { 00:13:41.651 "rw_ios_per_sec": 0, 00:13:41.651 "rw_mbytes_per_sec": 0, 00:13:41.651 "r_mbytes_per_sec": 0, 00:13:41.651 "w_mbytes_per_sec": 0 00:13:41.651 }, 00:13:41.651 "claimed": true, 00:13:41.651 "claim_type": "exclusive_write", 00:13:41.651 "zoned": false, 00:13:41.651 "supported_io_types": { 00:13:41.651 "read": true, 00:13:41.651 "write": true, 00:13:41.651 "unmap": true, 00:13:41.651 "write_zeroes": true, 00:13:41.651 "flush": true, 00:13:41.651 "reset": true, 00:13:41.651 "compare": false, 00:13:41.651 "compare_and_write": false, 00:13:41.651 "abort": true, 00:13:41.651 "nvme_admin": false, 00:13:41.651 "nvme_io": false 00:13:41.651 }, 00:13:41.651 "memory_domains": [ 00:13:41.651 { 00:13:41.651 "dma_device_id": "system", 00:13:41.651 "dma_device_type": 1 00:13:41.651 }, 00:13:41.651 { 00:13:41.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.651 "dma_device_type": 2 00:13:41.651 } 00:13:41.651 ], 00:13:41.651 "driver_specific": { 00:13:41.651 "passthru": { 00:13:41.651 "name": "pt4", 00:13:41.651 "base_bdev_name": "malloc4" 00:13:41.651 } 00:13:41.651 } 00:13:41.651 }' 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.651 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:41.652 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:41.652 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:41.652 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:41.911 [2024-06-10 10:17:47.325792] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a9586acf-2712-11ef-b084-113036b5c18d '!=' a9586acf-2712-11ef-b084-113036b5c18d ']' 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 60806 
00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 60806 ']' 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 60806 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps -c -o command 60806 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # tail -1 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:13:41.911 killing process with pid 60806 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 60806' 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 60806 00:13:41.911 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 60806 00:13:41.911 [2024-06-10 10:17:47.355825] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.911 [2024-06-10 10:17:47.355879] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.911 [2024-06-10 10:17:47.355905] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.911 [2024-06-10 10:17:47.355916] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abb4c80 name raid_bdev1, state offline 00:13:41.911 [2024-06-10 10:17:47.375293] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.170 10:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:42.170 00:13:42.170 real 0m13.256s 00:13:42.170 user 0m23.618s 00:13:42.171 sys 0m2.109s 00:13:42.171 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:42.171 10:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.171 ************************************ 00:13:42.171 END TEST raid_superblock_test 00:13:42.171 ************************************ 00:13:42.171 10:17:47 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:42.171 10:17:47 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:42.171 10:17:47 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:42.171 10:17:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:42.171 ************************************ 00:13:42.171 START TEST raid_read_error_test 00:13:42.171 ************************************ 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 4 read 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.VrR09Bzz 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=61203 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 61203 /var/tmp/spdk-raid.sock 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 61203 ']' 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:42.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:42.171 10:17:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.171 [2024-06-10 10:17:47.604175] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:13:42.171 [2024-06-10 10:17:47.604387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:42.739 EAL: TSC is not safe to use in SMP mode 00:13:42.739 EAL: TSC is not invariant 00:13:42.739 [2024-06-10 10:17:48.077890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.739 [2024-06-10 10:17:48.159258] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:42.739 [2024-06-10 10:17:48.161666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.739 [2024-06-10 10:17:48.162448] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.739 [2024-06-10 10:17:48.162463] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.998 10:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:42.998 10:17:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:13:42.998 10:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:42.998 10:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:43.256 BaseBdev1_malloc 00:13:43.256 10:17:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:43.514 true 00:13:43.514 10:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:43.773 [2024-06-10 10:17:49.321376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:43.773 [2024-06-10 10:17:49.321474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.773 [2024-06-10 10:17:49.321503] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c7df780 00:13:43.773 [2024-06-10 10:17:49.321511] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.773 [2024-06-10 10:17:49.322083] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.773 [2024-06-10 10:17:49.322116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:43.773 BaseBdev1 00:13:43.773 10:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:43.773 10:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:44.032 BaseBdev2_malloc 00:13:44.290 10:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:44.290 true 00:13:44.290 10:17:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:44.548 [2024-06-10 10:17:50.137423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:44.548 [2024-06-10 10:17:50.137506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.548 [2024-06-10 10:17:50.137541] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c7dfc80 00:13:44.548 [2024-06-10 10:17:50.137569] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.548 [2024-06-10 10:17:50.138226] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.548 [2024-06-10 10:17:50.138281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:44.548 BaseBdev2 00:13:44.806 10:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:44.806 10:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:45.064 BaseBdev3_malloc 00:13:45.064 10:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:45.322 true 00:13:45.322 10:17:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:45.580 [2024-06-10 10:17:51.149441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:45.580 [2024-06-10 10:17:51.149500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.580 [2024-06-10 10:17:51.149529] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c7e0180 00:13:45.580 [2024-06-10 10:17:51.149537] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.580 [2024-06-10 10:17:51.150096] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.580 [2024-06-10 10:17:51.150125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:45.580 BaseBdev3 00:13:45.580 10:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:45.580 10:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:46.146 BaseBdev4_malloc 00:13:46.146 10:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:13:46.146 true 00:13:46.146 10:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:46.404 [2024-06-10 10:17:51.945477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 
00:13:46.404 [2024-06-10 10:17:51.945541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.404 [2024-06-10 10:17:51.945569] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c7e0680 00:13:46.404 [2024-06-10 10:17:51.945588] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.404 [2024-06-10 10:17:51.946164] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.404 [2024-06-10 10:17:51.946195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:46.404 BaseBdev4 00:13:46.404 10:17:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:13:46.973 [2024-06-10 10:17:52.269493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.973 [2024-06-10 10:17:52.269988] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.973 [2024-06-10 10:17:52.270017] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:46.973 [2024-06-10 10:17:52.270029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:46.973 [2024-06-10 10:17:52.270093] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c7e0900 00:13:46.973 [2024-06-10 10:17:52.270098] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:46.973 [2024-06-10 10:17:52.270134] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c84be20 00:13:46.973 [2024-06-10 10:17:52.270194] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c7e0900 00:13:46.973 [2024-06-10 10:17:52.270198] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c7e0900 00:13:46.973 [2024-06-10 10:17:52.270221] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.973 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:47.265 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.265 "name": "raid_bdev1", 00:13:47.265 "uuid": "b219a6c3-2712-11ef-b084-113036b5c18d", 00:13:47.265 "strip_size_kb": 64, 00:13:47.265 "state": "online", 00:13:47.265 "raid_level": "raid0", 00:13:47.265 "superblock": true, 00:13:47.265 "num_base_bdevs": 4, 00:13:47.265 "num_base_bdevs_discovered": 4, 00:13:47.265 "num_base_bdevs_operational": 4, 00:13:47.265 "base_bdevs_list": [ 00:13:47.265 { 00:13:47.265 "name": "BaseBdev1", 00:13:47.265 "uuid": "e3387d3c-87db-f650-9301-188314232d0d", 00:13:47.265 "is_configured": true, 00:13:47.265 "data_offset": 2048, 00:13:47.265 "data_size": 63488 00:13:47.265 }, 00:13:47.265 { 00:13:47.265 "name": "BaseBdev2", 00:13:47.265 "uuid": "37c2ac1b-5e7a-a356-a516-ad195d39b67d", 00:13:47.265 "is_configured": true, 00:13:47.265 "data_offset": 2048, 00:13:47.265 "data_size": 63488 00:13:47.265 }, 00:13:47.265 { 00:13:47.265 "name": "BaseBdev3", 00:13:47.265 "uuid": "1c5f05ac-a565-325d-ada0-3c842aceaef8", 00:13:47.265 "is_configured": true, 00:13:47.265 "data_offset": 2048, 00:13:47.265 "data_size": 63488 00:13:47.265 }, 00:13:47.265 { 00:13:47.265 "name": "BaseBdev4", 00:13:47.265 "uuid": "17c0bb3f-08a1-4f5c-8f72-a5c98b75993a", 00:13:47.265 "is_configured": true, 00:13:47.265 "data_offset": 2048, 00:13:47.265 "data_size": 63488 00:13:47.265 } 00:13:47.265 ] 00:13:47.265 }' 00:13:47.265 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.265 10:17:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.545 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:47.545 10:17:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:47.545 [2024-06-10 10:17:53.093605] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c84bec0 00:13:48.481 10:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.739 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.998 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.998 "name": "raid_bdev1", 00:13:48.998 "uuid": "b219a6c3-2712-11ef-b084-113036b5c18d", 00:13:48.998 "strip_size_kb": 64, 00:13:48.998 "state": "online", 00:13:48.998 "raid_level": "raid0", 00:13:48.998 "superblock": true, 00:13:48.998 "num_base_bdevs": 4, 00:13:48.998 "num_base_bdevs_discovered": 4, 00:13:48.998 "num_base_bdevs_operational": 4, 00:13:48.998 "base_bdevs_list": [ 00:13:48.998 { 00:13:48.998 "name": "BaseBdev1", 00:13:48.998 "uuid": "e3387d3c-87db-f650-9301-188314232d0d", 00:13:48.998 "is_configured": true, 00:13:48.998 "data_offset": 2048, 00:13:48.998 "data_size": 63488 00:13:48.998 }, 00:13:48.998 { 00:13:48.998 "name": "BaseBdev2", 00:13:48.998 "uuid": "37c2ac1b-5e7a-a356-a516-ad195d39b67d", 00:13:48.998 "is_configured": true, 00:13:48.998 "data_offset": 2048, 00:13:48.998 "data_size": 63488 00:13:48.998 }, 00:13:48.998 { 00:13:48.998 "name": "BaseBdev3", 00:13:48.998 "uuid": "1c5f05ac-a565-325d-ada0-3c842aceaef8", 00:13:48.998 "is_configured": true, 00:13:48.998 "data_offset": 2048, 00:13:48.998 "data_size": 63488 00:13:48.998 }, 00:13:48.998 { 00:13:48.998 "name": "BaseBdev4", 00:13:48.998 "uuid": "17c0bb3f-08a1-4f5c-8f72-a5c98b75993a", 00:13:48.998 "is_configured": true, 00:13:48.998 "data_offset": 2048, 00:13:48.998 "data_size": 63488 00:13:48.998 } 00:13:48.998 ] 00:13:48.998 }' 00:13:48.998 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.998 10:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.565 10:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:49.824 [2024-06-10 10:17:55.187437] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.824 [2024-06-10 10:17:55.187469] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.824 [2024-06-10 10:17:55.187740] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.824 [2024-06-10 10:17:55.187748] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.824 [2024-06-10 10:17:55.187757] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.824 [2024-06-10 10:17:55.187761] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c7e0900 name raid_bdev1, state offline 00:13:49.824 0 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 61203 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 61203 ']' 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 61203 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 61203 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # tail -1 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:13:49.824 killing process with pid 61203 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 61203' 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 61203 00:13:49.824 [2024-06-10 10:17:55.215814] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 61203 00:13:49.824 [2024-06-10 10:17:55.235266] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.VrR09Bzz 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:13:49.824 00:13:49.824 real 0m7.829s 00:13:49.824 user 0m12.849s 00:13:49.824 sys 0m1.118s 00:13:49.824 ************************************ 00:13:49.824 END TEST raid_read_error_test 00:13:49.824 ************************************ 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:49.824 10:17:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.083 10:17:55 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:50.083 10:17:55 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:50.083 10:17:55 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:50.083 10:17:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.083 ************************************ 00:13:50.083 START TEST raid_write_error_test 00:13:50.083 ************************************ 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid0 4 write 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:50.083 10:17:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:50.083 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.W0Pstatk 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=61345 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 61345 /var/tmp/spdk-raid.sock 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 61345 ']' 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:50.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:50.084 10:17:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.084 [2024-06-10 10:17:55.479685] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:13:50.084 [2024-06-10 10:17:55.479894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:50.342 EAL: TSC is not safe to use in SMP mode 00:13:50.342 EAL: TSC is not invariant 00:13:50.342 [2024-06-10 10:17:55.934889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.637 [2024-06-10 10:17:56.014064] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:50.638 [2024-06-10 10:17:56.016190] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.638 [2024-06-10 10:17:56.016897] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.638 [2024-06-10 10:17:56.016908] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.209 10:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:51.209 10:17:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:13:51.209 10:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:51.209 10:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:51.209 BaseBdev1_malloc 00:13:51.209 10:17:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:51.467 true 00:13:51.468 10:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:51.727 [2024-06-10 10:17:57.211790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:51.727 [2024-06-10 10:17:57.211858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.727 [2024-06-10 10:17:57.211885] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9f8780 00:13:51.727 [2024-06-10 10:17:57.211892] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.727 [2024-06-10 10:17:57.212398] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.727 [2024-06-10 10:17:57.212421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:51.727 BaseBdev1 00:13:51.727 10:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:51.727 10:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:51.985 BaseBdev2_malloc 00:13:51.985 10:17:57 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:52.243 true 00:13:52.243 10:17:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:52.501 [2024-06-10 10:17:57.991814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:52.501 [2024-06-10 10:17:57.991875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.501 [2024-06-10 10:17:57.991904] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9f8c80 00:13:52.501 [2024-06-10 10:17:57.991912] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.501 [2024-06-10 10:17:57.992458] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.501 [2024-06-10 10:17:57.992482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:52.501 BaseBdev2 00:13:52.501 10:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:52.501 10:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:52.760 BaseBdev3_malloc 00:13:52.760 10:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:53.018 true 00:13:53.018 10:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:53.277 [2024-06-10 10:17:58.867854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:53.277 [2024-06-10 10:17:58.867913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.277 [2024-06-10 10:17:58.867939] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9f9180 00:13:53.277 [2024-06-10 10:17:58.867946] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.277 [2024-06-10 10:17:58.868465] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.277 [2024-06-10 10:17:58.868488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:53.277 BaseBdev3 00:13:53.536 10:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:53.536 10:17:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:53.536 BaseBdev4_malloc 00:13:53.536 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:13:53.794 true 00:13:53.794 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:54.058 [2024-06-10 10:17:59.571918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev4_malloc 00:13:54.058 [2024-06-10 10:17:59.572001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.058 [2024-06-10 10:17:59.572045] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a9f9680 00:13:54.058 [2024-06-10 10:17:59.572067] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.058 [2024-06-10 10:17:59.572754] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.058 [2024-06-10 10:17:59.572803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:54.058 BaseBdev4 00:13:54.058 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:13:54.336 [2024-06-10 10:17:59.815940] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.336 [2024-06-10 10:17:59.816430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.336 [2024-06-10 10:17:59.816457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.336 [2024-06-10 10:17:59.816471] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:54.336 [2024-06-10 10:17:59.816528] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a9f9900 00:13:54.336 [2024-06-10 10:17:59.816533] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:54.336 [2024-06-10 10:17:59.816568] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aa64e20 00:13:54.336 [2024-06-10 10:17:59.816629] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a9f9900 00:13:54.336 [2024-06-10 10:17:59.816633] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a9f9900 00:13:54.336 [2024-06-10 10:17:59.816658] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.336 10:17:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.594 10:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.594 "name": "raid_bdev1", 00:13:54.594 "uuid": "b69925be-2712-11ef-b084-113036b5c18d", 00:13:54.594 "strip_size_kb": 64, 00:13:54.594 "state": "online", 00:13:54.594 "raid_level": "raid0", 00:13:54.594 "superblock": true, 00:13:54.594 "num_base_bdevs": 4, 00:13:54.594 "num_base_bdevs_discovered": 4, 00:13:54.594 "num_base_bdevs_operational": 4, 00:13:54.594 "base_bdevs_list": [ 00:13:54.594 { 00:13:54.594 "name": "BaseBdev1", 00:13:54.594 "uuid": "6ef2b1ba-19f1-9357-bdb4-0dc072534b07", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": "BaseBdev2", 00:13:54.594 "uuid": "99c8c329-c8af-c55b-ab89-addde398a0e5", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": "BaseBdev3", 00:13:54.594 "uuid": "e193603a-fe5e-5e55-84fb-847b673efa42", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 }, 00:13:54.594 { 00:13:54.594 "name": "BaseBdev4", 00:13:54.594 "uuid": "4c3f478b-534a-4558-aed8-9599ece3fb63", 00:13:54.594 "is_configured": true, 00:13:54.594 "data_offset": 2048, 00:13:54.594 "data_size": 63488 00:13:54.594 } 00:13:54.594 ] 00:13:54.594 }' 00:13:54.594 10:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.594 10:18:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.852 10:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:54.853 10:18:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:55.112 [2024-06-10 10:18:00.572042] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82aa64ec0 00:13:56.049 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.307 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:56.308 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.308 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.308 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.308 10:18:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.567 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.567 "name": "raid_bdev1", 00:13:56.567 "uuid": "b69925be-2712-11ef-b084-113036b5c18d", 00:13:56.567 "strip_size_kb": 64, 00:13:56.567 "state": "online", 00:13:56.567 "raid_level": "raid0", 00:13:56.567 "superblock": true, 00:13:56.567 "num_base_bdevs": 4, 00:13:56.567 "num_base_bdevs_discovered": 4, 00:13:56.567 "num_base_bdevs_operational": 4, 00:13:56.567 "base_bdevs_list": [ 00:13:56.567 { 00:13:56.567 "name": "BaseBdev1", 00:13:56.567 "uuid": "6ef2b1ba-19f1-9357-bdb4-0dc072534b07", 00:13:56.567 "is_configured": true, 00:13:56.567 "data_offset": 2048, 00:13:56.567 "data_size": 63488 00:13:56.567 }, 00:13:56.567 { 00:13:56.567 "name": "BaseBdev2", 00:13:56.567 "uuid": "99c8c329-c8af-c55b-ab89-addde398a0e5", 00:13:56.567 "is_configured": true, 00:13:56.567 "data_offset": 2048, 00:13:56.567 "data_size": 63488 00:13:56.567 }, 00:13:56.567 { 00:13:56.567 "name": "BaseBdev3", 00:13:56.567 "uuid": "e193603a-fe5e-5e55-84fb-847b673efa42", 00:13:56.567 "is_configured": true, 00:13:56.567 "data_offset": 2048, 00:13:56.567 "data_size": 63488 00:13:56.567 }, 00:13:56.567 { 00:13:56.567 "name": "BaseBdev4", 00:13:56.567 "uuid": "4c3f478b-534a-4558-aed8-9599ece3fb63", 00:13:56.567 "is_configured": true, 00:13:56.567 "data_offset": 2048, 00:13:56.567 "data_size": 63488 00:13:56.567 } 00:13:56.567 ] 00:13:56.567 }' 00:13:56.567 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.567 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.826 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:57.132 [2024-06-10 10:18:02.637307] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.132 [2024-06-10 10:18:02.637337] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.132 [2024-06-10 10:18:02.637628] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.132 [2024-06-10 10:18:02.637637] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.132 [2024-06-10 10:18:02.637646] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.132 [2024-06-10 10:18:02.637650] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a9f9900 name raid_bdev1, state offline 00:13:57.132 0 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 61345 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 61345 ']' 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 61345 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 
-- # uname 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 61345 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # tail -1 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:13:57.132 killing process with pid 61345 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 61345' 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 61345 00:13:57.132 [2024-06-10 10:18:02.665973] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.132 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 61345 00:13:57.132 [2024-06-10 10:18:02.685374] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:57.391 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:57.391 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.W0Pstatk 00:13:57.391 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:57.391 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:13:57.391 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:57.391 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:57.391 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:57.391 10:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:13:57.391 00:13:57.391 real 0m7.399s 00:13:57.391 user 0m11.913s 00:13:57.391 sys 0m1.165s 00:13:57.391 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:57.391 10:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.391 ************************************ 00:13:57.391 END TEST raid_write_error_test 00:13:57.391 ************************************ 00:13:57.391 10:18:02 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:57.391 10:18:02 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:57.391 10:18:02 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:13:57.391 10:18:02 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:57.391 10:18:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:57.391 ************************************ 00:13:57.391 START TEST raid_state_function_test 00:13:57.391 ************************************ 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 4 false 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:57.391 10:18:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=61481 00:13:57.391 Process raid pid: 61481 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 61481' 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 61481 /var/tmp/spdk-raid.sock 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:57.391 10:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 61481 ']' 00:13:57.392 10:18:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:57.392 10:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:57.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:57.392 10:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:57.392 10:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:57.392 10:18:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.392 [2024-06-10 10:18:02.921810] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:13:57.392 [2024-06-10 10:18:02.922043] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:57.959 EAL: TSC is not safe to use in SMP mode 00:13:57.959 EAL: TSC is not invariant 00:13:57.959 [2024-06-10 10:18:03.392343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.959 [2024-06-10 10:18:03.472992] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:57.959 [2024-06-10 10:18:03.475231] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.959 [2024-06-10 10:18:03.475947] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.959 [2024-06-10 10:18:03.475960] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.543 10:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:58.543 10:18:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:13:58.543 10:18:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:58.801 [2024-06-10 10:18:04.183150] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:58.801 [2024-06-10 10:18:04.183233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:58.801 [2024-06-10 10:18:04.183238] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:58.801 [2024-06-10 10:18:04.183246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:58.801 [2024-06-10 10:18:04.183249] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:58.801 [2024-06-10 10:18:04.183257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:58.801 [2024-06-10 10:18:04.183260] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:58.801 [2024-06-10 10:18:04.183267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:58.801 
10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.801 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.060 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.060 "name": "Existed_Raid", 00:13:59.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.060 "strip_size_kb": 64, 00:13:59.060 "state": "configuring", 00:13:59.060 "raid_level": "concat", 00:13:59.060 "superblock": false, 00:13:59.060 "num_base_bdevs": 4, 00:13:59.060 "num_base_bdevs_discovered": 0, 00:13:59.060 "num_base_bdevs_operational": 4, 00:13:59.060 "base_bdevs_list": [ 00:13:59.060 { 00:13:59.060 "name": "BaseBdev1", 00:13:59.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.060 "is_configured": false, 00:13:59.060 "data_offset": 0, 00:13:59.060 "data_size": 0 00:13:59.060 }, 00:13:59.060 { 00:13:59.060 "name": "BaseBdev2", 00:13:59.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.060 "is_configured": false, 00:13:59.060 "data_offset": 0, 00:13:59.060 "data_size": 0 00:13:59.060 }, 00:13:59.060 { 00:13:59.060 "name": "BaseBdev3", 00:13:59.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.060 "is_configured": false, 00:13:59.060 "data_offset": 0, 00:13:59.060 "data_size": 0 00:13:59.060 }, 00:13:59.060 { 00:13:59.060 "name": "BaseBdev4", 00:13:59.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.060 "is_configured": false, 00:13:59.060 "data_offset": 0, 00:13:59.060 "data_size": 0 00:13:59.060 } 00:13:59.060 ] 00:13:59.060 }' 00:13:59.060 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.060 10:18:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.319 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:59.578 [2024-06-10 10:18:04.931228] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.578 [2024-06-10 10:18:04.931270] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5a8500 name Existed_Raid, state configuring 00:13:59.578 10:18:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:59.837 [2024-06-10 10:18:05.259236] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:13:59.837 [2024-06-10 10:18:05.259286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.837 [2024-06-10 10:18:05.259290] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.837 [2024-06-10 10:18:05.259298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.837 [2024-06-10 10:18:05.259301] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:59.837 [2024-06-10 10:18:05.259308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:59.837 [2024-06-10 10:18:05.259311] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:59.837 [2024-06-10 10:18:05.259342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:59.837 10:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:00.096 [2024-06-10 10:18:05.560220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.096 BaseBdev1 00:14:00.096 10:18:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:00.096 10:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:14:00.096 10:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:00.096 10:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:00.096 10:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:00.096 10:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:00.096 10:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:00.354 10:18:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:00.612 [ 00:14:00.612 { 00:14:00.612 "name": "BaseBdev1", 00:14:00.612 "aliases": [ 00:14:00.612 "ba058318-2712-11ef-b084-113036b5c18d" 00:14:00.612 ], 00:14:00.612 "product_name": "Malloc disk", 00:14:00.612 "block_size": 512, 00:14:00.612 "num_blocks": 65536, 00:14:00.612 "uuid": "ba058318-2712-11ef-b084-113036b5c18d", 00:14:00.612 "assigned_rate_limits": { 00:14:00.612 "rw_ios_per_sec": 0, 00:14:00.612 "rw_mbytes_per_sec": 0, 00:14:00.612 "r_mbytes_per_sec": 0, 00:14:00.612 "w_mbytes_per_sec": 0 00:14:00.612 }, 00:14:00.612 "claimed": true, 00:14:00.612 "claim_type": "exclusive_write", 00:14:00.612 "zoned": false, 00:14:00.612 "supported_io_types": { 00:14:00.612 "read": true, 00:14:00.612 "write": true, 00:14:00.612 "unmap": true, 00:14:00.612 "write_zeroes": true, 00:14:00.612 "flush": true, 00:14:00.612 "reset": true, 00:14:00.612 "compare": false, 00:14:00.612 "compare_and_write": false, 00:14:00.612 "abort": true, 00:14:00.612 "nvme_admin": false, 00:14:00.612 "nvme_io": false 00:14:00.612 }, 00:14:00.612 "memory_domains": [ 00:14:00.612 { 00:14:00.612 "dma_device_id": "system", 00:14:00.612 "dma_device_type": 1 00:14:00.612 }, 00:14:00.612 { 00:14:00.612 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:00.612 "dma_device_type": 2 00:14:00.612 } 00:14:00.612 ], 00:14:00.612 "driver_specific": {} 00:14:00.612 } 00:14:00.612 ] 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.612 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.871 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.871 "name": "Existed_Raid", 00:14:00.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.871 "strip_size_kb": 64, 00:14:00.871 "state": "configuring", 00:14:00.871 "raid_level": "concat", 00:14:00.871 "superblock": false, 00:14:00.871 "num_base_bdevs": 4, 00:14:00.871 "num_base_bdevs_discovered": 1, 00:14:00.871 "num_base_bdevs_operational": 4, 00:14:00.871 "base_bdevs_list": [ 00:14:00.871 { 00:14:00.871 "name": "BaseBdev1", 00:14:00.871 "uuid": "ba058318-2712-11ef-b084-113036b5c18d", 00:14:00.871 "is_configured": true, 00:14:00.871 "data_offset": 0, 00:14:00.871 "data_size": 65536 00:14:00.871 }, 00:14:00.871 { 00:14:00.871 "name": "BaseBdev2", 00:14:00.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.871 "is_configured": false, 00:14:00.871 "data_offset": 0, 00:14:00.871 "data_size": 0 00:14:00.871 }, 00:14:00.871 { 00:14:00.871 "name": "BaseBdev3", 00:14:00.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.871 "is_configured": false, 00:14:00.871 "data_offset": 0, 00:14:00.871 "data_size": 0 00:14:00.871 }, 00:14:00.871 { 00:14:00.871 "name": "BaseBdev4", 00:14:00.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.871 "is_configured": false, 00:14:00.871 "data_offset": 0, 00:14:00.871 "data_size": 0 00:14:00.871 } 00:14:00.871 ] 00:14:00.871 }' 00:14:00.871 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.871 10:18:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.131 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:14:01.389 [2024-06-10 10:18:06.843342] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.389 [2024-06-10 10:18:06.843377] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5a8500 name Existed_Raid, state configuring 00:14:01.389 10:18:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:01.647 [2024-06-10 10:18:07.139360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.647 [2024-06-10 10:18:07.140056] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.647 [2024-06-10 10:18:07.140098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.647 [2024-06-10 10:18:07.140103] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.647 [2024-06-10 10:18:07.140111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.647 [2024-06-10 10:18:07.140114] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:01.647 [2024-06-10 10:18:07.140122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.647 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.905 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.905 "name": "Existed_Raid", 00:14:01.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.905 "strip_size_kb": 64, 00:14:01.905 "state": "configuring", 00:14:01.905 "raid_level": "concat", 00:14:01.905 "superblock": false, 00:14:01.905 "num_base_bdevs": 4, 00:14:01.905 
"num_base_bdevs_discovered": 1, 00:14:01.905 "num_base_bdevs_operational": 4, 00:14:01.905 "base_bdevs_list": [ 00:14:01.905 { 00:14:01.905 "name": "BaseBdev1", 00:14:01.905 "uuid": "ba058318-2712-11ef-b084-113036b5c18d", 00:14:01.905 "is_configured": true, 00:14:01.905 "data_offset": 0, 00:14:01.905 "data_size": 65536 00:14:01.905 }, 00:14:01.905 { 00:14:01.905 "name": "BaseBdev2", 00:14:01.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.905 "is_configured": false, 00:14:01.905 "data_offset": 0, 00:14:01.905 "data_size": 0 00:14:01.905 }, 00:14:01.905 { 00:14:01.905 "name": "BaseBdev3", 00:14:01.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.905 "is_configured": false, 00:14:01.905 "data_offset": 0, 00:14:01.905 "data_size": 0 00:14:01.905 }, 00:14:01.905 { 00:14:01.905 "name": "BaseBdev4", 00:14:01.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.905 "is_configured": false, 00:14:01.905 "data_offset": 0, 00:14:01.905 "data_size": 0 00:14:01.905 } 00:14:01.905 ] 00:14:01.905 }' 00:14:01.905 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.905 10:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.163 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:02.421 [2024-06-10 10:18:07.955495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.421 BaseBdev2 00:14:02.421 10:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:02.421 10:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:14:02.421 10:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:02.421 10:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:02.421 10:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:02.421 10:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:02.422 10:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:02.988 10:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:02.988 [ 00:14:02.988 { 00:14:02.988 "name": "BaseBdev2", 00:14:02.988 "aliases": [ 00:14:02.988 "bb732177-2712-11ef-b084-113036b5c18d" 00:14:02.988 ], 00:14:02.988 "product_name": "Malloc disk", 00:14:02.988 "block_size": 512, 00:14:02.988 "num_blocks": 65536, 00:14:02.988 "uuid": "bb732177-2712-11ef-b084-113036b5c18d", 00:14:02.988 "assigned_rate_limits": { 00:14:02.989 "rw_ios_per_sec": 0, 00:14:02.989 "rw_mbytes_per_sec": 0, 00:14:02.989 "r_mbytes_per_sec": 0, 00:14:02.989 "w_mbytes_per_sec": 0 00:14:02.989 }, 00:14:02.989 "claimed": true, 00:14:02.989 "claim_type": "exclusive_write", 00:14:02.989 "zoned": false, 00:14:02.989 "supported_io_types": { 00:14:02.989 "read": true, 00:14:02.989 "write": true, 00:14:02.989 "unmap": true, 00:14:02.989 "write_zeroes": true, 00:14:02.989 "flush": true, 00:14:02.989 "reset": true, 00:14:02.989 "compare": false, 00:14:02.989 
"compare_and_write": false, 00:14:02.989 "abort": true, 00:14:02.989 "nvme_admin": false, 00:14:02.989 "nvme_io": false 00:14:02.989 }, 00:14:02.989 "memory_domains": [ 00:14:02.989 { 00:14:02.989 "dma_device_id": "system", 00:14:02.989 "dma_device_type": 1 00:14:02.989 }, 00:14:02.989 { 00:14:02.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.989 "dma_device_type": 2 00:14:02.989 } 00:14:02.989 ], 00:14:02.989 "driver_specific": {} 00:14:02.989 } 00:14:02.989 ] 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.989 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.247 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.247 "name": "Existed_Raid", 00:14:03.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.247 "strip_size_kb": 64, 00:14:03.247 "state": "configuring", 00:14:03.247 "raid_level": "concat", 00:14:03.247 "superblock": false, 00:14:03.247 "num_base_bdevs": 4, 00:14:03.247 "num_base_bdevs_discovered": 2, 00:14:03.247 "num_base_bdevs_operational": 4, 00:14:03.247 "base_bdevs_list": [ 00:14:03.247 { 00:14:03.247 "name": "BaseBdev1", 00:14:03.247 "uuid": "ba058318-2712-11ef-b084-113036b5c18d", 00:14:03.247 "is_configured": true, 00:14:03.247 "data_offset": 0, 00:14:03.247 "data_size": 65536 00:14:03.247 }, 00:14:03.247 { 00:14:03.247 "name": "BaseBdev2", 00:14:03.247 "uuid": "bb732177-2712-11ef-b084-113036b5c18d", 00:14:03.247 "is_configured": true, 00:14:03.247 "data_offset": 0, 00:14:03.247 "data_size": 65536 00:14:03.247 }, 00:14:03.247 { 00:14:03.247 "name": "BaseBdev3", 00:14:03.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.247 "is_configured": false, 00:14:03.247 "data_offset": 0, 00:14:03.247 "data_size": 0 00:14:03.247 }, 00:14:03.247 { 00:14:03.247 "name": "BaseBdev4", 00:14:03.247 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:03.247 "is_configured": false, 00:14:03.247 "data_offset": 0, 00:14:03.247 "data_size": 0 00:14:03.247 } 00:14:03.247 ] 00:14:03.247 }' 00:14:03.247 10:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.247 10:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.816 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:03.816 [2024-06-10 10:18:09.391567] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:03.816 BaseBdev3 00:14:03.816 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:03.816 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:14:03.816 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:03.816 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:03.816 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:03.816 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:03.816 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:04.074 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:04.333 [ 00:14:04.333 { 00:14:04.333 "name": "BaseBdev3", 00:14:04.333 "aliases": [ 00:14:04.333 "bc4e426e-2712-11ef-b084-113036b5c18d" 00:14:04.333 ], 00:14:04.333 "product_name": "Malloc disk", 00:14:04.333 "block_size": 512, 00:14:04.333 "num_blocks": 65536, 00:14:04.333 "uuid": "bc4e426e-2712-11ef-b084-113036b5c18d", 00:14:04.333 "assigned_rate_limits": { 00:14:04.333 "rw_ios_per_sec": 0, 00:14:04.333 "rw_mbytes_per_sec": 0, 00:14:04.333 "r_mbytes_per_sec": 0, 00:14:04.333 "w_mbytes_per_sec": 0 00:14:04.333 }, 00:14:04.333 "claimed": true, 00:14:04.333 "claim_type": "exclusive_write", 00:14:04.333 "zoned": false, 00:14:04.333 "supported_io_types": { 00:14:04.333 "read": true, 00:14:04.333 "write": true, 00:14:04.333 "unmap": true, 00:14:04.333 "write_zeroes": true, 00:14:04.333 "flush": true, 00:14:04.333 "reset": true, 00:14:04.333 "compare": false, 00:14:04.333 "compare_and_write": false, 00:14:04.333 "abort": true, 00:14:04.333 "nvme_admin": false, 00:14:04.333 "nvme_io": false 00:14:04.333 }, 00:14:04.333 "memory_domains": [ 00:14:04.333 { 00:14:04.333 "dma_device_id": "system", 00:14:04.333 "dma_device_type": 1 00:14:04.333 }, 00:14:04.333 { 00:14:04.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.333 "dma_device_type": 2 00:14:04.333 } 00:14:04.333 ], 00:14:04.333 "driver_specific": {} 00:14:04.333 } 00:14:04.333 ] 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 4 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.333 10:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.592 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.592 "name": "Existed_Raid", 00:14:04.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.592 "strip_size_kb": 64, 00:14:04.592 "state": "configuring", 00:14:04.592 "raid_level": "concat", 00:14:04.592 "superblock": false, 00:14:04.592 "num_base_bdevs": 4, 00:14:04.592 "num_base_bdevs_discovered": 3, 00:14:04.592 "num_base_bdevs_operational": 4, 00:14:04.592 "base_bdevs_list": [ 00:14:04.592 { 00:14:04.592 "name": "BaseBdev1", 00:14:04.592 "uuid": "ba058318-2712-11ef-b084-113036b5c18d", 00:14:04.592 "is_configured": true, 00:14:04.592 "data_offset": 0, 00:14:04.592 "data_size": 65536 00:14:04.592 }, 00:14:04.592 { 00:14:04.592 "name": "BaseBdev2", 00:14:04.592 "uuid": "bb732177-2712-11ef-b084-113036b5c18d", 00:14:04.592 "is_configured": true, 00:14:04.592 "data_offset": 0, 00:14:04.592 "data_size": 65536 00:14:04.592 }, 00:14:04.592 { 00:14:04.592 "name": "BaseBdev3", 00:14:04.592 "uuid": "bc4e426e-2712-11ef-b084-113036b5c18d", 00:14:04.592 "is_configured": true, 00:14:04.592 "data_offset": 0, 00:14:04.592 "data_size": 65536 00:14:04.592 }, 00:14:04.592 { 00:14:04.592 "name": "BaseBdev4", 00:14:04.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.592 "is_configured": false, 00:14:04.592 "data_offset": 0, 00:14:04.592 "data_size": 0 00:14:04.592 } 00:14:04.592 ] 00:14:04.592 }' 00:14:04.592 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.592 10:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.851 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:05.110 [2024-06-10 10:18:10.663607] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:05.110 [2024-06-10 10:18:10.663633] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5a8a00 00:14:05.110 [2024-06-10 10:18:10.663637] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 262144, blocklen 512 00:14:05.110 [2024-06-10 10:18:10.663663] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b60bec0 00:14:05.110 [2024-06-10 10:18:10.663745] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5a8a00 00:14:05.110 [2024-06-10 10:18:10.663749] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b5a8a00 00:14:05.110 [2024-06-10 10:18:10.663776] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.110 BaseBdev4 00:14:05.110 10:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:05.110 10:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:14:05.110 10:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:05.110 10:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:05.110 10:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:05.110 10:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:05.110 10:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:05.369 10:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:05.628 [ 00:14:05.628 { 00:14:05.628 "name": "BaseBdev4", 00:14:05.628 "aliases": [ 00:14:05.628 "bd105b75-2712-11ef-b084-113036b5c18d" 00:14:05.628 ], 00:14:05.628 "product_name": "Malloc disk", 00:14:05.628 "block_size": 512, 00:14:05.628 "num_blocks": 65536, 00:14:05.628 "uuid": "bd105b75-2712-11ef-b084-113036b5c18d", 00:14:05.628 "assigned_rate_limits": { 00:14:05.628 "rw_ios_per_sec": 0, 00:14:05.628 "rw_mbytes_per_sec": 0, 00:14:05.628 "r_mbytes_per_sec": 0, 00:14:05.628 "w_mbytes_per_sec": 0 00:14:05.628 }, 00:14:05.628 "claimed": true, 00:14:05.628 "claim_type": "exclusive_write", 00:14:05.628 "zoned": false, 00:14:05.628 "supported_io_types": { 00:14:05.628 "read": true, 00:14:05.628 "write": true, 00:14:05.628 "unmap": true, 00:14:05.628 "write_zeroes": true, 00:14:05.628 "flush": true, 00:14:05.628 "reset": true, 00:14:05.628 "compare": false, 00:14:05.628 "compare_and_write": false, 00:14:05.628 "abort": true, 00:14:05.628 "nvme_admin": false, 00:14:05.628 "nvme_io": false 00:14:05.628 }, 00:14:05.628 "memory_domains": [ 00:14:05.628 { 00:14:05.628 "dma_device_id": "system", 00:14:05.628 "dma_device_type": 1 00:14:05.628 }, 00:14:05.628 { 00:14:05.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.628 "dma_device_type": 2 00:14:05.628 } 00:14:05.628 ], 00:14:05.628 "driver_specific": {} 00:14:05.628 } 00:14:05.628 ] 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.628 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.195 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.195 "name": "Existed_Raid", 00:14:06.195 "uuid": "bd1060cd-2712-11ef-b084-113036b5c18d", 00:14:06.195 "strip_size_kb": 64, 00:14:06.195 "state": "online", 00:14:06.195 "raid_level": "concat", 00:14:06.195 "superblock": false, 00:14:06.195 "num_base_bdevs": 4, 00:14:06.195 "num_base_bdevs_discovered": 4, 00:14:06.195 "num_base_bdevs_operational": 4, 00:14:06.195 "base_bdevs_list": [ 00:14:06.195 { 00:14:06.195 "name": "BaseBdev1", 00:14:06.195 "uuid": "ba058318-2712-11ef-b084-113036b5c18d", 00:14:06.195 "is_configured": true, 00:14:06.195 "data_offset": 0, 00:14:06.195 "data_size": 65536 00:14:06.195 }, 00:14:06.195 { 00:14:06.195 "name": "BaseBdev2", 00:14:06.195 "uuid": "bb732177-2712-11ef-b084-113036b5c18d", 00:14:06.195 "is_configured": true, 00:14:06.195 "data_offset": 0, 00:14:06.195 "data_size": 65536 00:14:06.195 }, 00:14:06.195 { 00:14:06.195 "name": "BaseBdev3", 00:14:06.195 "uuid": "bc4e426e-2712-11ef-b084-113036b5c18d", 00:14:06.195 "is_configured": true, 00:14:06.195 "data_offset": 0, 00:14:06.195 "data_size": 65536 00:14:06.195 }, 00:14:06.195 { 00:14:06.195 "name": "BaseBdev4", 00:14:06.195 "uuid": "bd105b75-2712-11ef-b084-113036b5c18d", 00:14:06.195 "is_configured": true, 00:14:06.195 "data_offset": 0, 00:14:06.195 "data_size": 65536 00:14:06.195 } 00:14:06.195 ] 00:14:06.195 }' 00:14:06.195 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.195 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.195 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:06.195 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:06.195 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:06.195 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:06.195 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:06.195 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:06.195 
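Each verify_raid_bdev_state call replayed in this trace reduces to the same steps: dump every raid bdev over the test socket, pick out Existed_Raid with jq, and compare fields such as .state and .num_base_bdevs_discovered against the expected values passed in. A condensed sketch of that pattern, using only the rpc.py path, socket and jq filter visible in the trace; the expected_* variable names are illustrative shorthand, not the literal script body:

    rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # dump all raid bdevs and keep only the one under test
    info=$("$rpc_py" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # assert on the fields the test cares about ("configuring", "online" or "offline")
    [[ $(jq -r '.state' <<< "$info") == "$expected_state" ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq $expected_discovered ]]
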
10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:06.195 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:06.453 [2024-06-10 10:18:12.035598] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.453 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:06.453 "name": "Existed_Raid", 00:14:06.453 "aliases": [ 00:14:06.453 "bd1060cd-2712-11ef-b084-113036b5c18d" 00:14:06.453 ], 00:14:06.453 "product_name": "Raid Volume", 00:14:06.453 "block_size": 512, 00:14:06.453 "num_blocks": 262144, 00:14:06.453 "uuid": "bd1060cd-2712-11ef-b084-113036b5c18d", 00:14:06.453 "assigned_rate_limits": { 00:14:06.453 "rw_ios_per_sec": 0, 00:14:06.453 "rw_mbytes_per_sec": 0, 00:14:06.453 "r_mbytes_per_sec": 0, 00:14:06.453 "w_mbytes_per_sec": 0 00:14:06.453 }, 00:14:06.453 "claimed": false, 00:14:06.453 "zoned": false, 00:14:06.453 "supported_io_types": { 00:14:06.453 "read": true, 00:14:06.453 "write": true, 00:14:06.453 "unmap": true, 00:14:06.453 "write_zeroes": true, 00:14:06.453 "flush": true, 00:14:06.453 "reset": true, 00:14:06.453 "compare": false, 00:14:06.453 "compare_and_write": false, 00:14:06.453 "abort": false, 00:14:06.453 "nvme_admin": false, 00:14:06.453 "nvme_io": false 00:14:06.453 }, 00:14:06.453 "memory_domains": [ 00:14:06.453 { 00:14:06.453 "dma_device_id": "system", 00:14:06.453 "dma_device_type": 1 00:14:06.453 }, 00:14:06.453 { 00:14:06.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.453 "dma_device_type": 2 00:14:06.453 }, 00:14:06.453 { 00:14:06.453 "dma_device_id": "system", 00:14:06.453 "dma_device_type": 1 00:14:06.453 }, 00:14:06.453 { 00:14:06.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.453 "dma_device_type": 2 00:14:06.453 }, 00:14:06.453 { 00:14:06.453 "dma_device_id": "system", 00:14:06.454 "dma_device_type": 1 00:14:06.454 }, 00:14:06.454 { 00:14:06.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.454 "dma_device_type": 2 00:14:06.454 }, 00:14:06.454 { 00:14:06.454 "dma_device_id": "system", 00:14:06.454 "dma_device_type": 1 00:14:06.454 }, 00:14:06.454 { 00:14:06.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.454 "dma_device_type": 2 00:14:06.454 } 00:14:06.454 ], 00:14:06.454 "driver_specific": { 00:14:06.454 "raid": { 00:14:06.454 "uuid": "bd1060cd-2712-11ef-b084-113036b5c18d", 00:14:06.454 "strip_size_kb": 64, 00:14:06.454 "state": "online", 00:14:06.454 "raid_level": "concat", 00:14:06.454 "superblock": false, 00:14:06.454 "num_base_bdevs": 4, 00:14:06.454 "num_base_bdevs_discovered": 4, 00:14:06.454 "num_base_bdevs_operational": 4, 00:14:06.454 "base_bdevs_list": [ 00:14:06.454 { 00:14:06.454 "name": "BaseBdev1", 00:14:06.454 "uuid": "ba058318-2712-11ef-b084-113036b5c18d", 00:14:06.454 "is_configured": true, 00:14:06.454 "data_offset": 0, 00:14:06.454 "data_size": 65536 00:14:06.454 }, 00:14:06.454 { 00:14:06.454 "name": "BaseBdev2", 00:14:06.454 "uuid": "bb732177-2712-11ef-b084-113036b5c18d", 00:14:06.454 "is_configured": true, 00:14:06.454 "data_offset": 0, 00:14:06.454 "data_size": 65536 00:14:06.454 }, 00:14:06.454 { 00:14:06.454 "name": "BaseBdev3", 00:14:06.454 "uuid": "bc4e426e-2712-11ef-b084-113036b5c18d", 00:14:06.454 "is_configured": true, 00:14:06.454 "data_offset": 0, 00:14:06.454 "data_size": 65536 00:14:06.454 }, 00:14:06.454 { 00:14:06.454 "name": "BaseBdev4", 00:14:06.454 
"uuid": "bd105b75-2712-11ef-b084-113036b5c18d", 00:14:06.454 "is_configured": true, 00:14:06.454 "data_offset": 0, 00:14:06.454 "data_size": 65536 00:14:06.454 } 00:14:06.454 ] 00:14:06.454 } 00:14:06.454 } 00:14:06.454 }' 00:14:06.454 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:06.713 BaseBdev2 00:14:06.713 BaseBdev3 00:14:06.713 BaseBdev4' 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:06.713 "name": "BaseBdev1", 00:14:06.713 "aliases": [ 00:14:06.713 "ba058318-2712-11ef-b084-113036b5c18d" 00:14:06.713 ], 00:14:06.713 "product_name": "Malloc disk", 00:14:06.713 "block_size": 512, 00:14:06.713 "num_blocks": 65536, 00:14:06.713 "uuid": "ba058318-2712-11ef-b084-113036b5c18d", 00:14:06.713 "assigned_rate_limits": { 00:14:06.713 "rw_ios_per_sec": 0, 00:14:06.713 "rw_mbytes_per_sec": 0, 00:14:06.713 "r_mbytes_per_sec": 0, 00:14:06.713 "w_mbytes_per_sec": 0 00:14:06.713 }, 00:14:06.713 "claimed": true, 00:14:06.713 "claim_type": "exclusive_write", 00:14:06.713 "zoned": false, 00:14:06.713 "supported_io_types": { 00:14:06.713 "read": true, 00:14:06.713 "write": true, 00:14:06.713 "unmap": true, 00:14:06.713 "write_zeroes": true, 00:14:06.713 "flush": true, 00:14:06.713 "reset": true, 00:14:06.713 "compare": false, 00:14:06.713 "compare_and_write": false, 00:14:06.713 "abort": true, 00:14:06.713 "nvme_admin": false, 00:14:06.713 "nvme_io": false 00:14:06.713 }, 00:14:06.713 "memory_domains": [ 00:14:06.713 { 00:14:06.713 "dma_device_id": "system", 00:14:06.713 "dma_device_type": 1 00:14:06.713 }, 00:14:06.713 { 00:14:06.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.713 "dma_device_type": 2 00:14:06.713 } 00:14:06.713 ], 00:14:06.713 "driver_specific": {} 00:14:06.713 }' 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:06.713 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:06.973 10:18:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:06.973 "name": "BaseBdev2", 00:14:06.973 "aliases": [ 00:14:06.973 "bb732177-2712-11ef-b084-113036b5c18d" 00:14:06.973 ], 00:14:06.973 "product_name": "Malloc disk", 00:14:06.973 "block_size": 512, 00:14:06.973 "num_blocks": 65536, 00:14:06.973 "uuid": "bb732177-2712-11ef-b084-113036b5c18d", 00:14:06.973 "assigned_rate_limits": { 00:14:06.973 "rw_ios_per_sec": 0, 00:14:06.973 "rw_mbytes_per_sec": 0, 00:14:06.973 "r_mbytes_per_sec": 0, 00:14:06.973 "w_mbytes_per_sec": 0 00:14:06.973 }, 00:14:06.973 "claimed": true, 00:14:06.973 "claim_type": "exclusive_write", 00:14:06.973 "zoned": false, 00:14:06.973 "supported_io_types": { 00:14:06.973 "read": true, 00:14:06.973 "write": true, 00:14:06.973 "unmap": true, 00:14:06.973 "write_zeroes": true, 00:14:06.973 "flush": true, 00:14:06.973 "reset": true, 00:14:06.973 "compare": false, 00:14:06.973 "compare_and_write": false, 00:14:06.973 "abort": true, 00:14:06.973 "nvme_admin": false, 00:14:06.973 "nvme_io": false 00:14:06.973 }, 00:14:06.973 "memory_domains": [ 00:14:06.973 { 00:14:06.973 "dma_device_id": "system", 00:14:06.973 "dma_device_type": 1 00:14:06.973 }, 00:14:06.973 { 00:14:06.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.973 "dma_device_type": 2 00:14:06.973 } 00:14:06.973 ], 00:14:06.973 "driver_specific": {} 00:14:06.973 }' 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:06.973 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:14:07.232 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:07.232 "name": "BaseBdev3", 00:14:07.232 "aliases": [ 00:14:07.232 "bc4e426e-2712-11ef-b084-113036b5c18d" 00:14:07.232 ], 00:14:07.232 "product_name": "Malloc disk", 00:14:07.232 "block_size": 512, 00:14:07.232 "num_blocks": 65536, 00:14:07.232 "uuid": "bc4e426e-2712-11ef-b084-113036b5c18d", 00:14:07.232 "assigned_rate_limits": { 00:14:07.232 "rw_ios_per_sec": 0, 00:14:07.232 "rw_mbytes_per_sec": 0, 00:14:07.232 "r_mbytes_per_sec": 0, 00:14:07.232 "w_mbytes_per_sec": 0 00:14:07.232 }, 00:14:07.232 "claimed": true, 00:14:07.232 "claim_type": "exclusive_write", 00:14:07.232 "zoned": false, 00:14:07.232 "supported_io_types": { 00:14:07.232 "read": true, 00:14:07.232 "write": true, 00:14:07.232 "unmap": true, 00:14:07.232 "write_zeroes": true, 00:14:07.232 "flush": true, 00:14:07.232 "reset": true, 00:14:07.232 "compare": false, 00:14:07.232 "compare_and_write": false, 00:14:07.232 "abort": true, 00:14:07.232 "nvme_admin": false, 00:14:07.233 "nvme_io": false 00:14:07.233 }, 00:14:07.233 "memory_domains": [ 00:14:07.233 { 00:14:07.233 "dma_device_id": "system", 00:14:07.233 "dma_device_type": 1 00:14:07.233 }, 00:14:07.233 { 00:14:07.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.233 "dma_device_type": 2 00:14:07.233 } 00:14:07.233 ], 00:14:07.233 "driver_specific": {} 00:14:07.233 }' 00:14:07.233 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:07.491 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:07.750 "name": "BaseBdev4", 00:14:07.750 "aliases": [ 00:14:07.750 "bd105b75-2712-11ef-b084-113036b5c18d" 00:14:07.750 ], 00:14:07.750 "product_name": "Malloc disk", 00:14:07.750 "block_size": 512, 00:14:07.750 "num_blocks": 65536, 00:14:07.750 "uuid": "bd105b75-2712-11ef-b084-113036b5c18d", 00:14:07.750 "assigned_rate_limits": { 00:14:07.750 "rw_ios_per_sec": 0, 00:14:07.750 
"rw_mbytes_per_sec": 0, 00:14:07.750 "r_mbytes_per_sec": 0, 00:14:07.750 "w_mbytes_per_sec": 0 00:14:07.750 }, 00:14:07.750 "claimed": true, 00:14:07.750 "claim_type": "exclusive_write", 00:14:07.750 "zoned": false, 00:14:07.750 "supported_io_types": { 00:14:07.750 "read": true, 00:14:07.750 "write": true, 00:14:07.750 "unmap": true, 00:14:07.750 "write_zeroes": true, 00:14:07.750 "flush": true, 00:14:07.750 "reset": true, 00:14:07.750 "compare": false, 00:14:07.750 "compare_and_write": false, 00:14:07.750 "abort": true, 00:14:07.750 "nvme_admin": false, 00:14:07.750 "nvme_io": false 00:14:07.750 }, 00:14:07.750 "memory_domains": [ 00:14:07.750 { 00:14:07.750 "dma_device_id": "system", 00:14:07.750 "dma_device_type": 1 00:14:07.750 }, 00:14:07.750 { 00:14:07.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.750 "dma_device_type": 2 00:14:07.750 } 00:14:07.750 ], 00:14:07.750 "driver_specific": {} 00:14:07.750 }' 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:07.750 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:08.009 [2024-06-10 10:18:13.451640] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:08.009 [2024-06-10 10:18:13.451662] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:08.009 [2024-06-10 10:18:13.451674] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.009 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.268 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.268 "name": "Existed_Raid", 00:14:08.268 "uuid": "bd1060cd-2712-11ef-b084-113036b5c18d", 00:14:08.268 "strip_size_kb": 64, 00:14:08.268 "state": "offline", 00:14:08.268 "raid_level": "concat", 00:14:08.268 "superblock": false, 00:14:08.268 "num_base_bdevs": 4, 00:14:08.268 "num_base_bdevs_discovered": 3, 00:14:08.268 "num_base_bdevs_operational": 3, 00:14:08.268 "base_bdevs_list": [ 00:14:08.268 { 00:14:08.268 "name": null, 00:14:08.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.268 "is_configured": false, 00:14:08.268 "data_offset": 0, 00:14:08.268 "data_size": 65536 00:14:08.268 }, 00:14:08.268 { 00:14:08.268 "name": "BaseBdev2", 00:14:08.268 "uuid": "bb732177-2712-11ef-b084-113036b5c18d", 00:14:08.268 "is_configured": true, 00:14:08.268 "data_offset": 0, 00:14:08.268 "data_size": 65536 00:14:08.268 }, 00:14:08.268 { 00:14:08.268 "name": "BaseBdev3", 00:14:08.268 "uuid": "bc4e426e-2712-11ef-b084-113036b5c18d", 00:14:08.268 "is_configured": true, 00:14:08.268 "data_offset": 0, 00:14:08.268 "data_size": 65536 00:14:08.268 }, 00:14:08.268 { 00:14:08.268 "name": "BaseBdev4", 00:14:08.268 "uuid": "bd105b75-2712-11ef-b084-113036b5c18d", 00:14:08.268 "is_configured": true, 00:14:08.268 "data_offset": 0, 00:14:08.268 "data_size": 65536 00:14:08.268 } 00:14:08.268 ] 00:14:08.268 }' 00:14:08.268 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.268 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.530 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:08.530 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:08.530 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.530 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:08.789 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:08.789 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:08.789 10:18:14 
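With all four members online, the trace above deletes BaseBdev1; because has_redundancy returns 1 for the concat level, expected_state flips to offline and the array is verified with only three members discovered. In other words, a concat array cannot survive losing any member. The essential commands, condensed from the trace (the jq filter on the state field is shorthand for the fuller verification above):

    # concat provides no redundancy, so removing any member takes the array offline
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'   # expected to print "offline"
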
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:09.048 [2024-06-10 10:18:14.528512] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:09.048 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:09.048 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:09.048 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.048 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:09.306 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:09.306 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:09.306 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:09.563 [2024-06-10 10:18:15.005258] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:09.563 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:09.563 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:09.563 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.563 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:09.820 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:09.820 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:09.820 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:10.077 [2024-06-10 10:18:15.498016] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:10.077 [2024-06-10 10:18:15.498052] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5a8a00 name Existed_Raid, state offline 00:14:10.077 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:10.077 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:10.077 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.077 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:10.334 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:10.334 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:10.334 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:10.334 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:10.334 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:10.335 10:18:15 
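Every bdev_malloc_create in this run is followed by the waitforbdev helper from autotest_common.sh; as replayed in this trace it issues bdev_wait_for_examine and then bdev_get_bdevs with a 2000 ms timeout before returning 0. The full helper body is not shown in the trace, so the sketch below only paraphrases the calls that are visible:

    # paraphrase of the waitforbdev calls visible in this trace (default timeout 2000 ms)
    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}
        /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
        /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }
    waitforbdev BaseBdev2
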
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:10.592 BaseBdev2 00:14:10.592 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:10.592 10:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:14:10.593 10:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:10.593 10:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:10.593 10:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:10.593 10:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:10.593 10:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:10.851 10:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:11.108 [ 00:14:11.108 { 00:14:11.108 "name": "BaseBdev2", 00:14:11.108 "aliases": [ 00:14:11.108 "c0385d5c-2712-11ef-b084-113036b5c18d" 00:14:11.108 ], 00:14:11.108 "product_name": "Malloc disk", 00:14:11.108 "block_size": 512, 00:14:11.108 "num_blocks": 65536, 00:14:11.108 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:11.108 "assigned_rate_limits": { 00:14:11.108 "rw_ios_per_sec": 0, 00:14:11.108 "rw_mbytes_per_sec": 0, 00:14:11.108 "r_mbytes_per_sec": 0, 00:14:11.108 "w_mbytes_per_sec": 0 00:14:11.108 }, 00:14:11.108 "claimed": false, 00:14:11.108 "zoned": false, 00:14:11.108 "supported_io_types": { 00:14:11.108 "read": true, 00:14:11.108 "write": true, 00:14:11.108 "unmap": true, 00:14:11.108 "write_zeroes": true, 00:14:11.108 "flush": true, 00:14:11.108 "reset": true, 00:14:11.108 "compare": false, 00:14:11.108 "compare_and_write": false, 00:14:11.108 "abort": true, 00:14:11.108 "nvme_admin": false, 00:14:11.108 "nvme_io": false 00:14:11.108 }, 00:14:11.108 "memory_domains": [ 00:14:11.108 { 00:14:11.108 "dma_device_id": "system", 00:14:11.108 "dma_device_type": 1 00:14:11.108 }, 00:14:11.108 { 00:14:11.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.108 "dma_device_type": 2 00:14:11.108 } 00:14:11.108 ], 00:14:11.108 "driver_specific": {} 00:14:11.108 } 00:14:11.108 ] 00:14:11.109 10:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:11.109 10:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:11.109 10:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:11.109 10:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:11.366 BaseBdev3 00:14:11.366 10:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:11.366 10:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:14:11.366 10:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:11.366 10:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 
-- # local i 00:14:11.366 10:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:11.366 10:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:11.366 10:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.625 10:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:11.883 [ 00:14:11.883 { 00:14:11.883 "name": "BaseBdev3", 00:14:11.883 "aliases": [ 00:14:11.883 "c0acf25a-2712-11ef-b084-113036b5c18d" 00:14:11.883 ], 00:14:11.883 "product_name": "Malloc disk", 00:14:11.883 "block_size": 512, 00:14:11.883 "num_blocks": 65536, 00:14:11.883 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:11.883 "assigned_rate_limits": { 00:14:11.883 "rw_ios_per_sec": 0, 00:14:11.883 "rw_mbytes_per_sec": 0, 00:14:11.883 "r_mbytes_per_sec": 0, 00:14:11.883 "w_mbytes_per_sec": 0 00:14:11.883 }, 00:14:11.883 "claimed": false, 00:14:11.883 "zoned": false, 00:14:11.883 "supported_io_types": { 00:14:11.883 "read": true, 00:14:11.883 "write": true, 00:14:11.883 "unmap": true, 00:14:11.883 "write_zeroes": true, 00:14:11.883 "flush": true, 00:14:11.883 "reset": true, 00:14:11.883 "compare": false, 00:14:11.883 "compare_and_write": false, 00:14:11.883 "abort": true, 00:14:11.883 "nvme_admin": false, 00:14:11.883 "nvme_io": false 00:14:11.883 }, 00:14:11.883 "memory_domains": [ 00:14:11.883 { 00:14:11.883 "dma_device_id": "system", 00:14:11.883 "dma_device_type": 1 00:14:11.883 }, 00:14:11.883 { 00:14:11.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.883 "dma_device_type": 2 00:14:11.883 } 00:14:11.883 ], 00:14:11.883 "driver_specific": {} 00:14:11.883 } 00:14:11.883 ] 00:14:11.883 10:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:11.883 10:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:11.883 10:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:11.883 10:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:12.141 BaseBdev4 00:14:12.141 10:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:12.141 10:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:14:12.141 10:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:12.141 10:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:12.141 10:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:12.141 10:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:12.141 10:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.400 10:18:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 
00:14:12.658 [ 00:14:12.658 { 00:14:12.658 "name": "BaseBdev4", 00:14:12.658 "aliases": [ 00:14:12.658 "c128d9b4-2712-11ef-b084-113036b5c18d" 00:14:12.658 ], 00:14:12.658 "product_name": "Malloc disk", 00:14:12.658 "block_size": 512, 00:14:12.658 "num_blocks": 65536, 00:14:12.658 "uuid": "c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:12.658 "assigned_rate_limits": { 00:14:12.658 "rw_ios_per_sec": 0, 00:14:12.658 "rw_mbytes_per_sec": 0, 00:14:12.658 "r_mbytes_per_sec": 0, 00:14:12.658 "w_mbytes_per_sec": 0 00:14:12.658 }, 00:14:12.658 "claimed": false, 00:14:12.658 "zoned": false, 00:14:12.658 "supported_io_types": { 00:14:12.658 "read": true, 00:14:12.658 "write": true, 00:14:12.658 "unmap": true, 00:14:12.658 "write_zeroes": true, 00:14:12.658 "flush": true, 00:14:12.658 "reset": true, 00:14:12.658 "compare": false, 00:14:12.658 "compare_and_write": false, 00:14:12.658 "abort": true, 00:14:12.658 "nvme_admin": false, 00:14:12.658 "nvme_io": false 00:14:12.658 }, 00:14:12.658 "memory_domains": [ 00:14:12.658 { 00:14:12.658 "dma_device_id": "system", 00:14:12.658 "dma_device_type": 1 00:14:12.658 }, 00:14:12.658 { 00:14:12.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.658 "dma_device_type": 2 00:14:12.659 } 00:14:12.659 ], 00:14:12.659 "driver_specific": {} 00:14:12.659 } 00:14:12.659 ] 00:14:12.659 10:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:12.659 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:12.659 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:12.659 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:12.917 [2024-06-10 10:18:18.290954] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.917 [2024-06-10 10:18:18.291016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.917 [2024-06-10 10:18:18.291023] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.917 [2024-06-10 10:18:18.291446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.917 [2024-06-10 10:18:18.291462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.917 "name": "Existed_Raid", 00:14:12.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.917 "strip_size_kb": 64, 00:14:12.917 "state": "configuring", 00:14:12.917 "raid_level": "concat", 00:14:12.917 "superblock": false, 00:14:12.917 "num_base_bdevs": 4, 00:14:12.917 "num_base_bdevs_discovered": 3, 00:14:12.917 "num_base_bdevs_operational": 4, 00:14:12.917 "base_bdevs_list": [ 00:14:12.917 { 00:14:12.917 "name": "BaseBdev1", 00:14:12.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.917 "is_configured": false, 00:14:12.917 "data_offset": 0, 00:14:12.917 "data_size": 0 00:14:12.917 }, 00:14:12.917 { 00:14:12.917 "name": "BaseBdev2", 00:14:12.917 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:12.917 "is_configured": true, 00:14:12.917 "data_offset": 0, 00:14:12.917 "data_size": 65536 00:14:12.917 }, 00:14:12.917 { 00:14:12.917 "name": "BaseBdev3", 00:14:12.917 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:12.917 "is_configured": true, 00:14:12.917 "data_offset": 0, 00:14:12.917 "data_size": 65536 00:14:12.917 }, 00:14:12.917 { 00:14:12.917 "name": "BaseBdev4", 00:14:12.917 "uuid": "c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:12.917 "is_configured": true, 00:14:12.917 "data_offset": 0, 00:14:12.917 "data_size": 65536 00:14:12.917 } 00:14:12.917 ] 00:14:12.917 }' 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.917 10:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.494 10:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:13.752 [2024-06-10 10:18:19.106997] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
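The bdev_raid_create call replayed just above names BaseBdev1 as a member even though that bdev does not exist yet, so Existed_Raid comes up in the configuring state; the trace then detaches BaseBdev2 with bdev_raid_remove_base_bdev and continues to add and remove members while still configuring. Condensed to the two commands that matter (the create line is copied from the trace, the state query is shorthand for the full verification):

    # creating the array before all members exist leaves it "configuring" until the last one appears
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'   # prints "configuring"
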
00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.752 "name": "Existed_Raid", 00:14:13.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.752 "strip_size_kb": 64, 00:14:13.752 "state": "configuring", 00:14:13.752 "raid_level": "concat", 00:14:13.752 "superblock": false, 00:14:13.752 "num_base_bdevs": 4, 00:14:13.752 "num_base_bdevs_discovered": 2, 00:14:13.752 "num_base_bdevs_operational": 4, 00:14:13.752 "base_bdevs_list": [ 00:14:13.752 { 00:14:13.752 "name": "BaseBdev1", 00:14:13.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.752 "is_configured": false, 00:14:13.752 "data_offset": 0, 00:14:13.752 "data_size": 0 00:14:13.752 }, 00:14:13.752 { 00:14:13.752 "name": null, 00:14:13.752 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:13.752 "is_configured": false, 00:14:13.752 "data_offset": 0, 00:14:13.752 "data_size": 65536 00:14:13.752 }, 00:14:13.752 { 00:14:13.752 "name": "BaseBdev3", 00:14:13.752 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:13.752 "is_configured": true, 00:14:13.752 "data_offset": 0, 00:14:13.752 "data_size": 65536 00:14:13.752 }, 00:14:13.752 { 00:14:13.752 "name": "BaseBdev4", 00:14:13.752 "uuid": "c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:13.752 "is_configured": true, 00:14:13.752 "data_offset": 0, 00:14:13.752 "data_size": 65536 00:14:13.752 } 00:14:13.752 ] 00:14:13.752 }' 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.752 10:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.320 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.320 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:14.320 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:14.320 10:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:14.578 [2024-06-10 10:18:20.115151] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.578 BaseBdev1 00:14:14.578 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:14.578 10:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:14:14.578 10:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:14.578 10:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:14.578 10:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:14.578 10:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:14.578 10:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.836 10:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:15.096 [ 00:14:15.096 { 00:14:15.096 "name": "BaseBdev1", 00:14:15.096 "aliases": [ 00:14:15.096 "c2b28bae-2712-11ef-b084-113036b5c18d" 00:14:15.096 ], 00:14:15.096 "product_name": "Malloc disk", 00:14:15.096 "block_size": 512, 00:14:15.096 "num_blocks": 65536, 00:14:15.096 "uuid": "c2b28bae-2712-11ef-b084-113036b5c18d", 00:14:15.096 "assigned_rate_limits": { 00:14:15.096 "rw_ios_per_sec": 0, 00:14:15.096 "rw_mbytes_per_sec": 0, 00:14:15.096 "r_mbytes_per_sec": 0, 00:14:15.096 "w_mbytes_per_sec": 0 00:14:15.096 }, 00:14:15.096 "claimed": true, 00:14:15.096 "claim_type": "exclusive_write", 00:14:15.096 "zoned": false, 00:14:15.096 "supported_io_types": { 00:14:15.096 "read": true, 00:14:15.096 "write": true, 00:14:15.096 "unmap": true, 00:14:15.096 "write_zeroes": true, 00:14:15.096 "flush": true, 00:14:15.096 "reset": true, 00:14:15.096 "compare": false, 00:14:15.096 "compare_and_write": false, 00:14:15.096 "abort": true, 00:14:15.096 "nvme_admin": false, 00:14:15.096 "nvme_io": false 00:14:15.096 }, 00:14:15.096 "memory_domains": [ 00:14:15.096 { 00:14:15.096 "dma_device_id": "system", 00:14:15.096 "dma_device_type": 1 00:14:15.096 }, 00:14:15.096 { 00:14:15.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.096 "dma_device_type": 2 00:14:15.096 } 00:14:15.096 ], 00:14:15.096 "driver_specific": {} 00:14:15.096 } 00:14:15.096 ] 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.096 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.355 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.355 "name": "Existed_Raid", 00:14:15.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.355 "strip_size_kb": 64, 00:14:15.355 "state": "configuring", 00:14:15.355 "raid_level": "concat", 00:14:15.355 "superblock": false, 00:14:15.355 
"num_base_bdevs": 4, 00:14:15.355 "num_base_bdevs_discovered": 3, 00:14:15.355 "num_base_bdevs_operational": 4, 00:14:15.355 "base_bdevs_list": [ 00:14:15.355 { 00:14:15.355 "name": "BaseBdev1", 00:14:15.355 "uuid": "c2b28bae-2712-11ef-b084-113036b5c18d", 00:14:15.355 "is_configured": true, 00:14:15.355 "data_offset": 0, 00:14:15.355 "data_size": 65536 00:14:15.355 }, 00:14:15.356 { 00:14:15.356 "name": null, 00:14:15.356 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:15.356 "is_configured": false, 00:14:15.356 "data_offset": 0, 00:14:15.356 "data_size": 65536 00:14:15.356 }, 00:14:15.356 { 00:14:15.356 "name": "BaseBdev3", 00:14:15.356 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:15.356 "is_configured": true, 00:14:15.356 "data_offset": 0, 00:14:15.356 "data_size": 65536 00:14:15.356 }, 00:14:15.356 { 00:14:15.356 "name": "BaseBdev4", 00:14:15.356 "uuid": "c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:15.356 "is_configured": true, 00:14:15.356 "data_offset": 0, 00:14:15.356 "data_size": 65536 00:14:15.356 } 00:14:15.356 ] 00:14:15.356 }' 00:14:15.356 10:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.356 10:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.951 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:15.951 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.951 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:15.951 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:16.209 [2024-06-10 10:18:21.635182] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.209 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.468 10:18:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:16.468 "name": "Existed_Raid", 00:14:16.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.468 "strip_size_kb": 64, 00:14:16.468 "state": "configuring", 00:14:16.468 "raid_level": "concat", 00:14:16.468 "superblock": false, 00:14:16.468 "num_base_bdevs": 4, 00:14:16.468 "num_base_bdevs_discovered": 2, 00:14:16.468 "num_base_bdevs_operational": 4, 00:14:16.468 "base_bdevs_list": [ 00:14:16.468 { 00:14:16.468 "name": "BaseBdev1", 00:14:16.468 "uuid": "c2b28bae-2712-11ef-b084-113036b5c18d", 00:14:16.468 "is_configured": true, 00:14:16.468 "data_offset": 0, 00:14:16.468 "data_size": 65536 00:14:16.468 }, 00:14:16.468 { 00:14:16.468 "name": null, 00:14:16.468 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:16.468 "is_configured": false, 00:14:16.468 "data_offset": 0, 00:14:16.468 "data_size": 65536 00:14:16.468 }, 00:14:16.468 { 00:14:16.468 "name": null, 00:14:16.468 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:16.468 "is_configured": false, 00:14:16.468 "data_offset": 0, 00:14:16.468 "data_size": 65536 00:14:16.468 }, 00:14:16.468 { 00:14:16.468 "name": "BaseBdev4", 00:14:16.468 "uuid": "c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:16.468 "is_configured": true, 00:14:16.468 "data_offset": 0, 00:14:16.468 "data_size": 65536 00:14:16.468 } 00:14:16.468 ] 00:14:16.468 }' 00:14:16.468 10:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:16.468 10:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.725 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.725 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:16.983 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:16.983 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:17.242 [2024-06-10 10:18:22.811275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.242 10:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.809 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:17.809 "name": "Existed_Raid", 00:14:17.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.809 "strip_size_kb": 64, 00:14:17.809 "state": "configuring", 00:14:17.809 "raid_level": "concat", 00:14:17.809 "superblock": false, 00:14:17.809 "num_base_bdevs": 4, 00:14:17.809 "num_base_bdevs_discovered": 3, 00:14:17.809 "num_base_bdevs_operational": 4, 00:14:17.809 "base_bdevs_list": [ 00:14:17.809 { 00:14:17.809 "name": "BaseBdev1", 00:14:17.809 "uuid": "c2b28bae-2712-11ef-b084-113036b5c18d", 00:14:17.809 "is_configured": true, 00:14:17.809 "data_offset": 0, 00:14:17.809 "data_size": 65536 00:14:17.809 }, 00:14:17.809 { 00:14:17.809 "name": null, 00:14:17.809 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:17.809 "is_configured": false, 00:14:17.809 "data_offset": 0, 00:14:17.809 "data_size": 65536 00:14:17.809 }, 00:14:17.809 { 00:14:17.809 "name": "BaseBdev3", 00:14:17.809 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:17.809 "is_configured": true, 00:14:17.809 "data_offset": 0, 00:14:17.809 "data_size": 65536 00:14:17.809 }, 00:14:17.809 { 00:14:17.809 "name": "BaseBdev4", 00:14:17.809 "uuid": "c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:17.809 "is_configured": true, 00:14:17.809 "data_offset": 0, 00:14:17.809 "data_size": 65536 00:14:17.809 } 00:14:17.809 ] 00:14:17.809 }' 00:14:17.809 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.809 10:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.127 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:18.127 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:18.391 [2024-06-10 10:18:23.967357] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.391 10:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.961 10:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.961 "name": "Existed_Raid", 00:14:18.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.961 "strip_size_kb": 64, 00:14:18.961 "state": "configuring", 00:14:18.961 "raid_level": "concat", 00:14:18.961 "superblock": false, 00:14:18.961 "num_base_bdevs": 4, 00:14:18.961 "num_base_bdevs_discovered": 2, 00:14:18.961 "num_base_bdevs_operational": 4, 00:14:18.961 "base_bdevs_list": [ 00:14:18.961 { 00:14:18.961 "name": null, 00:14:18.961 "uuid": "c2b28bae-2712-11ef-b084-113036b5c18d", 00:14:18.961 "is_configured": false, 00:14:18.961 "data_offset": 0, 00:14:18.961 "data_size": 65536 00:14:18.961 }, 00:14:18.961 { 00:14:18.961 "name": null, 00:14:18.961 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:18.961 "is_configured": false, 00:14:18.961 "data_offset": 0, 00:14:18.961 "data_size": 65536 00:14:18.961 }, 00:14:18.961 { 00:14:18.961 "name": "BaseBdev3", 00:14:18.961 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:18.961 "is_configured": true, 00:14:18.961 "data_offset": 0, 00:14:18.961 "data_size": 65536 00:14:18.961 }, 00:14:18.961 { 00:14:18.961 "name": "BaseBdev4", 00:14:18.961 "uuid": "c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:18.961 "is_configured": true, 00:14:18.961 "data_offset": 0, 00:14:18.962 "data_size": 65536 00:14:18.962 } 00:14:18.962 ] 00:14:18.962 }' 00:14:18.962 10:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.962 10:18:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.221 10:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.221 10:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:19.479 10:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:19.479 10:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:19.737 [2024-06-10 10:18:25.112166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.737 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.996 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.996 "name": "Existed_Raid", 00:14:19.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.996 "strip_size_kb": 64, 00:14:19.996 "state": "configuring", 00:14:19.996 "raid_level": "concat", 00:14:19.996 "superblock": false, 00:14:19.996 "num_base_bdevs": 4, 00:14:19.996 "num_base_bdevs_discovered": 3, 00:14:19.996 "num_base_bdevs_operational": 4, 00:14:19.996 "base_bdevs_list": [ 00:14:19.996 { 00:14:19.996 "name": null, 00:14:19.996 "uuid": "c2b28bae-2712-11ef-b084-113036b5c18d", 00:14:19.996 "is_configured": false, 00:14:19.996 "data_offset": 0, 00:14:19.996 "data_size": 65536 00:14:19.996 }, 00:14:19.996 { 00:14:19.996 "name": "BaseBdev2", 00:14:19.996 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:19.996 "is_configured": true, 00:14:19.996 "data_offset": 0, 00:14:19.996 "data_size": 65536 00:14:19.996 }, 00:14:19.996 { 00:14:19.996 "name": "BaseBdev3", 00:14:19.996 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:19.996 "is_configured": true, 00:14:19.996 "data_offset": 0, 00:14:19.996 "data_size": 65536 00:14:19.996 }, 00:14:19.996 { 00:14:19.996 "name": "BaseBdev4", 00:14:19.996 "uuid": "c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:19.996 "is_configured": true, 00:14:19.996 "data_offset": 0, 00:14:19.996 "data_size": 65536 00:14:19.996 } 00:14:19.996 ] 00:14:19.996 }' 00:14:19.996 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.996 10:18:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.254 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.254 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:20.512 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:20.512 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.512 10:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:20.770 10:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c2b28bae-2712-11ef-b084-113036b5c18d 00:14:21.030 [2024-06-10 10:18:26.412383] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:21.030 [2024-06-10 10:18:26.412408] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5a8f00 00:14:21.030 [2024-06-10 10:18:26.412412] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:21.030 [2024-06-10 10:18:26.412449] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b60be20 00:14:21.030 [2024-06-10 10:18:26.412506] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5a8f00 00:14:21.030 [2024-06-10 10:18:26.412520] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b5a8f00 00:14:21.030 [2024-06-10 10:18:26.412548] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.030 NewBaseBdev 00:14:21.030 10:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:21.030 10:18:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:14:21.030 10:18:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:21.030 10:18:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:14:21.030 10:18:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:21.030 10:18:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:21.030 10:18:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:21.290 10:18:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:21.549 [ 00:14:21.549 { 00:14:21.549 "name": "NewBaseBdev", 00:14:21.549 "aliases": [ 00:14:21.549 "c2b28bae-2712-11ef-b084-113036b5c18d" 00:14:21.549 ], 00:14:21.549 "product_name": "Malloc disk", 00:14:21.549 "block_size": 512, 00:14:21.549 "num_blocks": 65536, 00:14:21.549 "uuid": "c2b28bae-2712-11ef-b084-113036b5c18d", 00:14:21.549 "assigned_rate_limits": { 00:14:21.549 "rw_ios_per_sec": 0, 00:14:21.549 "rw_mbytes_per_sec": 0, 00:14:21.549 "r_mbytes_per_sec": 0, 00:14:21.549 "w_mbytes_per_sec": 0 00:14:21.549 }, 00:14:21.549 "claimed": true, 00:14:21.549 "claim_type": "exclusive_write", 00:14:21.549 "zoned": false, 00:14:21.549 "supported_io_types": { 00:14:21.549 "read": true, 00:14:21.549 "write": true, 00:14:21.549 "unmap": true, 00:14:21.549 "write_zeroes": true, 00:14:21.549 "flush": true, 00:14:21.549 "reset": true, 00:14:21.549 "compare": false, 00:14:21.549 "compare_and_write": false, 00:14:21.549 "abort": true, 00:14:21.549 "nvme_admin": false, 00:14:21.549 "nvme_io": false 00:14:21.549 }, 00:14:21.549 "memory_domains": [ 00:14:21.549 { 00:14:21.549 "dma_device_id": "system", 00:14:21.549 "dma_device_type": 1 00:14:21.549 }, 00:14:21.549 { 00:14:21.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.549 "dma_device_type": 2 00:14:21.549 } 00:14:21.549 ], 00:14:21.549 "driver_specific": {} 00:14:21.549 } 00:14:21.549 ] 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 
4 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.549 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.808 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.808 "name": "Existed_Raid", 00:14:21.808 "uuid": "c6737315-2712-11ef-b084-113036b5c18d", 00:14:21.808 "strip_size_kb": 64, 00:14:21.808 "state": "online", 00:14:21.808 "raid_level": "concat", 00:14:21.808 "superblock": false, 00:14:21.808 "num_base_bdevs": 4, 00:14:21.808 "num_base_bdevs_discovered": 4, 00:14:21.808 "num_base_bdevs_operational": 4, 00:14:21.808 "base_bdevs_list": [ 00:14:21.808 { 00:14:21.808 "name": "NewBaseBdev", 00:14:21.808 "uuid": "c2b28bae-2712-11ef-b084-113036b5c18d", 00:14:21.808 "is_configured": true, 00:14:21.808 "data_offset": 0, 00:14:21.808 "data_size": 65536 00:14:21.808 }, 00:14:21.808 { 00:14:21.808 "name": "BaseBdev2", 00:14:21.808 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:21.808 "is_configured": true, 00:14:21.808 "data_offset": 0, 00:14:21.808 "data_size": 65536 00:14:21.808 }, 00:14:21.808 { 00:14:21.808 "name": "BaseBdev3", 00:14:21.808 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:21.808 "is_configured": true, 00:14:21.808 "data_offset": 0, 00:14:21.808 "data_size": 65536 00:14:21.808 }, 00:14:21.808 { 00:14:21.808 "name": "BaseBdev4", 00:14:21.808 "uuid": "c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:21.808 "is_configured": true, 00:14:21.808 "data_offset": 0, 00:14:21.808 "data_size": 65536 00:14:21.808 } 00:14:21.808 ] 00:14:21.808 }' 00:14:21.808 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.808 10:18:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.067 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:22.067 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:22.067 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:22.067 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:22.067 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:22.067 
10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:22.067 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:22.067 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:22.326 [2024-06-10 10:18:27.876350] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.326 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:22.326 "name": "Existed_Raid", 00:14:22.326 "aliases": [ 00:14:22.326 "c6737315-2712-11ef-b084-113036b5c18d" 00:14:22.326 ], 00:14:22.326 "product_name": "Raid Volume", 00:14:22.326 "block_size": 512, 00:14:22.326 "num_blocks": 262144, 00:14:22.327 "uuid": "c6737315-2712-11ef-b084-113036b5c18d", 00:14:22.327 "assigned_rate_limits": { 00:14:22.327 "rw_ios_per_sec": 0, 00:14:22.327 "rw_mbytes_per_sec": 0, 00:14:22.327 "r_mbytes_per_sec": 0, 00:14:22.327 "w_mbytes_per_sec": 0 00:14:22.327 }, 00:14:22.327 "claimed": false, 00:14:22.327 "zoned": false, 00:14:22.327 "supported_io_types": { 00:14:22.327 "read": true, 00:14:22.327 "write": true, 00:14:22.327 "unmap": true, 00:14:22.327 "write_zeroes": true, 00:14:22.327 "flush": true, 00:14:22.327 "reset": true, 00:14:22.327 "compare": false, 00:14:22.327 "compare_and_write": false, 00:14:22.327 "abort": false, 00:14:22.327 "nvme_admin": false, 00:14:22.327 "nvme_io": false 00:14:22.327 }, 00:14:22.327 "memory_domains": [ 00:14:22.327 { 00:14:22.327 "dma_device_id": "system", 00:14:22.327 "dma_device_type": 1 00:14:22.327 }, 00:14:22.327 { 00:14:22.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.327 "dma_device_type": 2 00:14:22.327 }, 00:14:22.327 { 00:14:22.327 "dma_device_id": "system", 00:14:22.327 "dma_device_type": 1 00:14:22.327 }, 00:14:22.327 { 00:14:22.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.327 "dma_device_type": 2 00:14:22.327 }, 00:14:22.327 { 00:14:22.327 "dma_device_id": "system", 00:14:22.327 "dma_device_type": 1 00:14:22.327 }, 00:14:22.327 { 00:14:22.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.327 "dma_device_type": 2 00:14:22.327 }, 00:14:22.327 { 00:14:22.327 "dma_device_id": "system", 00:14:22.327 "dma_device_type": 1 00:14:22.327 }, 00:14:22.327 { 00:14:22.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.327 "dma_device_type": 2 00:14:22.327 } 00:14:22.327 ], 00:14:22.327 "driver_specific": { 00:14:22.327 "raid": { 00:14:22.327 "uuid": "c6737315-2712-11ef-b084-113036b5c18d", 00:14:22.327 "strip_size_kb": 64, 00:14:22.327 "state": "online", 00:14:22.327 "raid_level": "concat", 00:14:22.327 "superblock": false, 00:14:22.327 "num_base_bdevs": 4, 00:14:22.327 "num_base_bdevs_discovered": 4, 00:14:22.327 "num_base_bdevs_operational": 4, 00:14:22.327 "base_bdevs_list": [ 00:14:22.327 { 00:14:22.327 "name": "NewBaseBdev", 00:14:22.327 "uuid": "c2b28bae-2712-11ef-b084-113036b5c18d", 00:14:22.327 "is_configured": true, 00:14:22.327 "data_offset": 0, 00:14:22.327 "data_size": 65536 00:14:22.327 }, 00:14:22.327 { 00:14:22.327 "name": "BaseBdev2", 00:14:22.327 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:22.327 "is_configured": true, 00:14:22.327 "data_offset": 0, 00:14:22.327 "data_size": 65536 00:14:22.327 }, 00:14:22.327 { 00:14:22.327 "name": "BaseBdev3", 00:14:22.327 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:22.327 "is_configured": true, 00:14:22.327 "data_offset": 0, 
00:14:22.327 "data_size": 65536 00:14:22.327 }, 00:14:22.327 { 00:14:22.327 "name": "BaseBdev4", 00:14:22.327 "uuid": "c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:22.327 "is_configured": true, 00:14:22.327 "data_offset": 0, 00:14:22.327 "data_size": 65536 00:14:22.327 } 00:14:22.327 ] 00:14:22.327 } 00:14:22.327 } 00:14:22.327 }' 00:14:22.327 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:22.327 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:22.327 BaseBdev2 00:14:22.327 BaseBdev3 00:14:22.327 BaseBdev4' 00:14:22.327 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:22.327 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:22.327 10:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:22.586 "name": "NewBaseBdev", 00:14:22.586 "aliases": [ 00:14:22.586 "c2b28bae-2712-11ef-b084-113036b5c18d" 00:14:22.586 ], 00:14:22.586 "product_name": "Malloc disk", 00:14:22.586 "block_size": 512, 00:14:22.586 "num_blocks": 65536, 00:14:22.586 "uuid": "c2b28bae-2712-11ef-b084-113036b5c18d", 00:14:22.586 "assigned_rate_limits": { 00:14:22.586 "rw_ios_per_sec": 0, 00:14:22.586 "rw_mbytes_per_sec": 0, 00:14:22.586 "r_mbytes_per_sec": 0, 00:14:22.586 "w_mbytes_per_sec": 0 00:14:22.586 }, 00:14:22.586 "claimed": true, 00:14:22.586 "claim_type": "exclusive_write", 00:14:22.586 "zoned": false, 00:14:22.586 "supported_io_types": { 00:14:22.586 "read": true, 00:14:22.586 "write": true, 00:14:22.586 "unmap": true, 00:14:22.586 "write_zeroes": true, 00:14:22.586 "flush": true, 00:14:22.586 "reset": true, 00:14:22.586 "compare": false, 00:14:22.586 "compare_and_write": false, 00:14:22.586 "abort": true, 00:14:22.586 "nvme_admin": false, 00:14:22.586 "nvme_io": false 00:14:22.586 }, 00:14:22.586 "memory_domains": [ 00:14:22.586 { 00:14:22.586 "dma_device_id": "system", 00:14:22.586 "dma_device_type": 1 00:14:22.586 }, 00:14:22.586 { 00:14:22.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.586 "dma_device_type": 2 00:14:22.586 } 00:14:22.586 ], 00:14:22.586 "driver_specific": {} 00:14:22.586 }' 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:22.586 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:22.845 "name": "BaseBdev2", 00:14:22.845 "aliases": [ 00:14:22.845 "c0385d5c-2712-11ef-b084-113036b5c18d" 00:14:22.845 ], 00:14:22.845 "product_name": "Malloc disk", 00:14:22.845 "block_size": 512, 00:14:22.845 "num_blocks": 65536, 00:14:22.845 "uuid": "c0385d5c-2712-11ef-b084-113036b5c18d", 00:14:22.845 "assigned_rate_limits": { 00:14:22.845 "rw_ios_per_sec": 0, 00:14:22.845 "rw_mbytes_per_sec": 0, 00:14:22.845 "r_mbytes_per_sec": 0, 00:14:22.845 "w_mbytes_per_sec": 0 00:14:22.845 }, 00:14:22.845 "claimed": true, 00:14:22.845 "claim_type": "exclusive_write", 00:14:22.845 "zoned": false, 00:14:22.845 "supported_io_types": { 00:14:22.845 "read": true, 00:14:22.845 "write": true, 00:14:22.845 "unmap": true, 00:14:22.845 "write_zeroes": true, 00:14:22.845 "flush": true, 00:14:22.845 "reset": true, 00:14:22.845 "compare": false, 00:14:22.845 "compare_and_write": false, 00:14:22.845 "abort": true, 00:14:22.845 "nvme_admin": false, 00:14:22.845 "nvme_io": false 00:14:22.845 }, 00:14:22.845 "memory_domains": [ 00:14:22.845 { 00:14:22.845 "dma_device_id": "system", 00:14:22.845 "dma_device_type": 1 00:14:22.845 }, 00:14:22.845 { 00:14:22.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.845 "dma_device_type": 2 00:14:22.845 } 00:14:22.845 ], 00:14:22.845 "driver_specific": {} 00:14:22.845 }' 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:22.845 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.103 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.103 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.103 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.103 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 00:14:23.103 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.362 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.362 "name": "BaseBdev3", 00:14:23.362 "aliases": [ 00:14:23.362 "c0acf25a-2712-11ef-b084-113036b5c18d" 00:14:23.362 ], 00:14:23.362 "product_name": "Malloc disk", 00:14:23.362 "block_size": 512, 00:14:23.362 "num_blocks": 65536, 00:14:23.362 "uuid": "c0acf25a-2712-11ef-b084-113036b5c18d", 00:14:23.362 "assigned_rate_limits": { 00:14:23.362 "rw_ios_per_sec": 0, 00:14:23.362 "rw_mbytes_per_sec": 0, 00:14:23.362 "r_mbytes_per_sec": 0, 00:14:23.362 "w_mbytes_per_sec": 0 00:14:23.362 }, 00:14:23.362 "claimed": true, 00:14:23.362 "claim_type": "exclusive_write", 00:14:23.362 "zoned": false, 00:14:23.362 "supported_io_types": { 00:14:23.362 "read": true, 00:14:23.362 "write": true, 00:14:23.362 "unmap": true, 00:14:23.362 "write_zeroes": true, 00:14:23.362 "flush": true, 00:14:23.362 "reset": true, 00:14:23.362 "compare": false, 00:14:23.362 "compare_and_write": false, 00:14:23.362 "abort": true, 00:14:23.362 "nvme_admin": false, 00:14:23.362 "nvme_io": false 00:14:23.362 }, 00:14:23.362 "memory_domains": [ 00:14:23.362 { 00:14:23.362 "dma_device_id": "system", 00:14:23.362 "dma_device_type": 1 00:14:23.362 }, 00:14:23.362 { 00:14:23.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.362 "dma_device_type": 2 00:14:23.362 } 00:14:23.362 ], 00:14:23.362 "driver_specific": {} 00:14:23.362 }' 00:14:23.362 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.362 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.362 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.362 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.362 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.362 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.362 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.363 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.363 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.363 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.363 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.363 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.363 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:23.363 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:23.363 10:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.621 "name": "BaseBdev4", 00:14:23.621 "aliases": [ 00:14:23.621 "c128d9b4-2712-11ef-b084-113036b5c18d" 00:14:23.621 ], 00:14:23.621 "product_name": "Malloc disk", 00:14:23.621 "block_size": 512, 00:14:23.621 "num_blocks": 65536, 00:14:23.621 "uuid": 
"c128d9b4-2712-11ef-b084-113036b5c18d", 00:14:23.621 "assigned_rate_limits": { 00:14:23.621 "rw_ios_per_sec": 0, 00:14:23.621 "rw_mbytes_per_sec": 0, 00:14:23.621 "r_mbytes_per_sec": 0, 00:14:23.621 "w_mbytes_per_sec": 0 00:14:23.621 }, 00:14:23.621 "claimed": true, 00:14:23.621 "claim_type": "exclusive_write", 00:14:23.621 "zoned": false, 00:14:23.621 "supported_io_types": { 00:14:23.621 "read": true, 00:14:23.621 "write": true, 00:14:23.621 "unmap": true, 00:14:23.621 "write_zeroes": true, 00:14:23.621 "flush": true, 00:14:23.621 "reset": true, 00:14:23.621 "compare": false, 00:14:23.621 "compare_and_write": false, 00:14:23.621 "abort": true, 00:14:23.621 "nvme_admin": false, 00:14:23.621 "nvme_io": false 00:14:23.621 }, 00:14:23.621 "memory_domains": [ 00:14:23.621 { 00:14:23.621 "dma_device_id": "system", 00:14:23.621 "dma_device_type": 1 00:14:23.621 }, 00:14:23.621 { 00:14:23.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.621 "dma_device_type": 2 00:14:23.621 } 00:14:23.621 ], 00:14:23.621 "driver_specific": {} 00:14:23.621 }' 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.621 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:23.880 [2024-06-10 10:18:29.424397] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.880 [2024-06-10 10:18:29.424418] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.880 [2024-06-10 10:18:29.424434] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.880 [2024-06-10 10:18:29.424447] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.880 [2024-06-10 10:18:29.424451] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5a8f00 name Existed_Raid, state offline 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 61481 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 61481 ']' 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 61481 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # uname 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps -c -o command 61481 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # tail -1 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:14:23.880 killing process with pid 61481 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 61481' 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 61481 00:14:23.880 [2024-06-10 10:18:29.452667] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.880 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 61481 00:14:23.880 [2024-06-10 10:18:29.471647] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:24.139 00:14:24.139 real 0m26.732s 00:14:24.139 user 0m49.018s 00:14:24.139 sys 0m3.691s 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:24.139 ************************************ 00:14:24.139 END TEST raid_state_function_test 00:14:24.139 ************************************ 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.139 10:18:29 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:14:24.139 10:18:29 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:14:24.139 10:18:29 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:24.139 10:18:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:24.139 ************************************ 00:14:24.139 START TEST raid_state_function_test_sb 00:14:24.139 ************************************ 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test concat 4 true 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:24.139 10:18:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:24.139 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=62296 00:14:24.140 Process raid pid: 62296 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 62296' 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 62296 /var/tmp/spdk-raid.sock 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 62296 ']' 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:24.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:24.140 10:18:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.140 [2024-06-10 10:18:29.695434] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:14:24.140 [2024-06-10 10:18:29.695639] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:14:24.705 EAL: TSC is not safe to use in SMP mode 00:14:24.705 EAL: TSC is not invariant 00:14:24.705 [2024-06-10 10:18:30.151173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.705 [2024-06-10 10:18:30.244807] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:24.705 [2024-06-10 10:18:30.247783] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.705 [2024-06-10 10:18:30.248736] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.705 [2024-06-10 10:18:30.248749] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.291 10:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:25.291 10:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:14:25.291 10:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:25.560 [2024-06-10 10:18:31.012825] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.560 [2024-06-10 10:18:31.012873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.560 [2024-06-10 10:18:31.012878] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.560 [2024-06-10 10:18:31.012886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.560 [2024-06-10 10:18:31.012896] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:25.560 [2024-06-10 10:18:31.012920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:25.560 [2024-06-10 10:18:31.012923] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:25.560 [2024-06-10 10:18:31.012930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.560 
10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.560 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.817 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.817 "name": "Existed_Raid", 00:14:25.817 "uuid": "c9316a40-2712-11ef-b084-113036b5c18d", 00:14:25.817 "strip_size_kb": 64, 00:14:25.817 "state": "configuring", 00:14:25.817 "raid_level": "concat", 00:14:25.817 "superblock": true, 00:14:25.817 "num_base_bdevs": 4, 00:14:25.817 "num_base_bdevs_discovered": 0, 00:14:25.817 "num_base_bdevs_operational": 4, 00:14:25.817 "base_bdevs_list": [ 00:14:25.817 { 00:14:25.817 "name": "BaseBdev1", 00:14:25.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.817 "is_configured": false, 00:14:25.817 "data_offset": 0, 00:14:25.817 "data_size": 0 00:14:25.817 }, 00:14:25.817 { 00:14:25.817 "name": "BaseBdev2", 00:14:25.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.817 "is_configured": false, 00:14:25.817 "data_offset": 0, 00:14:25.817 "data_size": 0 00:14:25.817 }, 00:14:25.817 { 00:14:25.817 "name": "BaseBdev3", 00:14:25.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.817 "is_configured": false, 00:14:25.817 "data_offset": 0, 00:14:25.817 "data_size": 0 00:14:25.817 }, 00:14:25.817 { 00:14:25.817 "name": "BaseBdev4", 00:14:25.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.817 "is_configured": false, 00:14:25.817 "data_offset": 0, 00:14:25.817 "data_size": 0 00:14:25.817 } 00:14:25.817 ] 00:14:25.817 }' 00:14:25.818 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.818 10:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.075 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:26.335 [2024-06-10 10:18:31.928868] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.335 [2024-06-10 10:18:31.928891] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8bd500 name Existed_Raid, state configuring 00:14:26.594 10:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:26.853 [2024-06-10 10:18:32.200877] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.853 [2024-06-10 10:18:32.200934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.853 [2024-06-10 10:18:32.200938] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.853 [2024-06-10 10:18:32.200945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.853 
[2024-06-10 10:18:32.200948] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:26.853 [2024-06-10 10:18:32.200955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.853 [2024-06-10 10:18:32.200958] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:26.853 [2024-06-10 10:18:32.200964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:26.853 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:27.112 [2024-06-10 10:18:32.497731] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.112 BaseBdev1 00:14:27.112 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:27.112 10:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:14:27.112 10:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:27.112 10:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:27.112 10:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:27.112 10:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:27.112 10:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:27.371 10:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:27.656 [ 00:14:27.656 { 00:14:27.656 "name": "BaseBdev1", 00:14:27.656 "aliases": [ 00:14:27.656 "ca13ddb8-2712-11ef-b084-113036b5c18d" 00:14:27.656 ], 00:14:27.656 "product_name": "Malloc disk", 00:14:27.656 "block_size": 512, 00:14:27.656 "num_blocks": 65536, 00:14:27.656 "uuid": "ca13ddb8-2712-11ef-b084-113036b5c18d", 00:14:27.656 "assigned_rate_limits": { 00:14:27.656 "rw_ios_per_sec": 0, 00:14:27.656 "rw_mbytes_per_sec": 0, 00:14:27.656 "r_mbytes_per_sec": 0, 00:14:27.656 "w_mbytes_per_sec": 0 00:14:27.656 }, 00:14:27.656 "claimed": true, 00:14:27.656 "claim_type": "exclusive_write", 00:14:27.656 "zoned": false, 00:14:27.656 "supported_io_types": { 00:14:27.656 "read": true, 00:14:27.656 "write": true, 00:14:27.656 "unmap": true, 00:14:27.656 "write_zeroes": true, 00:14:27.656 "flush": true, 00:14:27.656 "reset": true, 00:14:27.656 "compare": false, 00:14:27.656 "compare_and_write": false, 00:14:27.656 "abort": true, 00:14:27.656 "nvme_admin": false, 00:14:27.656 "nvme_io": false 00:14:27.656 }, 00:14:27.656 "memory_domains": [ 00:14:27.656 { 00:14:27.656 "dma_device_id": "system", 00:14:27.656 "dma_device_type": 1 00:14:27.656 }, 00:14:27.656 { 00:14:27.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.656 "dma_device_type": 2 00:14:27.656 } 00:14:27.656 ], 00:14:27.656 "driver_specific": {} 00:14:27.656 } 00:14:27.656 ] 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.656 10:18:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.656 10:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.656 "name": "Existed_Raid", 00:14:27.656 "uuid": "c9e6b29e-2712-11ef-b084-113036b5c18d", 00:14:27.656 "strip_size_kb": 64, 00:14:27.656 "state": "configuring", 00:14:27.656 "raid_level": "concat", 00:14:27.656 "superblock": true, 00:14:27.656 "num_base_bdevs": 4, 00:14:27.656 "num_base_bdevs_discovered": 1, 00:14:27.656 "num_base_bdevs_operational": 4, 00:14:27.656 "base_bdevs_list": [ 00:14:27.656 { 00:14:27.656 "name": "BaseBdev1", 00:14:27.656 "uuid": "ca13ddb8-2712-11ef-b084-113036b5c18d", 00:14:27.656 "is_configured": true, 00:14:27.656 "data_offset": 2048, 00:14:27.656 "data_size": 63488 00:14:27.656 }, 00:14:27.656 { 00:14:27.656 "name": "BaseBdev2", 00:14:27.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.656 "is_configured": false, 00:14:27.656 "data_offset": 0, 00:14:27.656 "data_size": 0 00:14:27.656 }, 00:14:27.656 { 00:14:27.657 "name": "BaseBdev3", 00:14:27.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.657 "is_configured": false, 00:14:27.657 "data_offset": 0, 00:14:27.657 "data_size": 0 00:14:27.657 }, 00:14:27.657 { 00:14:27.657 "name": "BaseBdev4", 00:14:27.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.657 "is_configured": false, 00:14:27.657 "data_offset": 0, 00:14:27.657 "data_size": 0 00:14:27.657 } 00:14:27.657 ] 00:14:27.657 }' 00:14:27.657 10:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.657 10:18:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.223 10:18:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:28.223 [2024-06-10 10:18:33.744927] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:28.223 [2024-06-10 10:18:33.744955] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8bd500 name Existed_Raid, state configuring 00:14:28.223 10:18:33 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:28.481 [2024-06-10 10:18:33.984976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.481 [2024-06-10 10:18:33.985699] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:28.481 [2024-06-10 10:18:33.985748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:28.481 [2024-06-10 10:18:33.985758] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:28.481 [2024-06-10 10:18:33.985773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:28.481 [2024-06-10 10:18:33.985782] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:28.481 [2024-06-10 10:18:33.985797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.481 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.740 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.740 "name": "Existed_Raid", 00:14:28.740 "uuid": "caf6edb7-2712-11ef-b084-113036b5c18d", 00:14:28.740 "strip_size_kb": 64, 00:14:28.740 "state": "configuring", 00:14:28.740 "raid_level": "concat", 00:14:28.740 "superblock": true, 00:14:28.740 "num_base_bdevs": 4, 00:14:28.740 "num_base_bdevs_discovered": 1, 00:14:28.740 "num_base_bdevs_operational": 4, 00:14:28.740 "base_bdevs_list": [ 00:14:28.740 { 00:14:28.740 "name": "BaseBdev1", 00:14:28.740 "uuid": "ca13ddb8-2712-11ef-b084-113036b5c18d", 00:14:28.740 "is_configured": true, 00:14:28.740 "data_offset": 2048, 00:14:28.740 
"data_size": 63488 00:14:28.740 }, 00:14:28.740 { 00:14:28.740 "name": "BaseBdev2", 00:14:28.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.740 "is_configured": false, 00:14:28.740 "data_offset": 0, 00:14:28.740 "data_size": 0 00:14:28.740 }, 00:14:28.740 { 00:14:28.740 "name": "BaseBdev3", 00:14:28.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.740 "is_configured": false, 00:14:28.740 "data_offset": 0, 00:14:28.740 "data_size": 0 00:14:28.740 }, 00:14:28.740 { 00:14:28.740 "name": "BaseBdev4", 00:14:28.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.740 "is_configured": false, 00:14:28.740 "data_offset": 0, 00:14:28.740 "data_size": 0 00:14:28.740 } 00:14:28.740 ] 00:14:28.740 }' 00:14:28.740 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.998 10:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.256 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:29.555 [2024-06-10 10:18:34.949120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.555 BaseBdev2 00:14:29.555 10:18:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:29.555 10:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:14:29.555 10:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:29.555 10:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:29.555 10:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:29.555 10:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:29.555 10:18:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:29.814 10:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:30.072 [ 00:14:30.072 { 00:14:30.072 "name": "BaseBdev2", 00:14:30.072 "aliases": [ 00:14:30.072 "cb8a0768-2712-11ef-b084-113036b5c18d" 00:14:30.072 ], 00:14:30.072 "product_name": "Malloc disk", 00:14:30.072 "block_size": 512, 00:14:30.072 "num_blocks": 65536, 00:14:30.072 "uuid": "cb8a0768-2712-11ef-b084-113036b5c18d", 00:14:30.072 "assigned_rate_limits": { 00:14:30.072 "rw_ios_per_sec": 0, 00:14:30.072 "rw_mbytes_per_sec": 0, 00:14:30.072 "r_mbytes_per_sec": 0, 00:14:30.072 "w_mbytes_per_sec": 0 00:14:30.072 }, 00:14:30.072 "claimed": true, 00:14:30.072 "claim_type": "exclusive_write", 00:14:30.072 "zoned": false, 00:14:30.072 "supported_io_types": { 00:14:30.072 "read": true, 00:14:30.072 "write": true, 00:14:30.072 "unmap": true, 00:14:30.072 "write_zeroes": true, 00:14:30.072 "flush": true, 00:14:30.072 "reset": true, 00:14:30.072 "compare": false, 00:14:30.072 "compare_and_write": false, 00:14:30.072 "abort": true, 00:14:30.072 "nvme_admin": false, 00:14:30.072 "nvme_io": false 00:14:30.072 }, 00:14:30.072 "memory_domains": [ 00:14:30.072 { 00:14:30.072 "dma_device_id": "system", 00:14:30.072 "dma_device_type": 1 00:14:30.072 }, 00:14:30.072 
{ 00:14:30.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.072 "dma_device_type": 2 00:14:30.072 } 00:14:30.072 ], 00:14:30.072 "driver_specific": {} 00:14:30.072 } 00:14:30.072 ] 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.072 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.330 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:30.330 "name": "Existed_Raid", 00:14:30.330 "uuid": "caf6edb7-2712-11ef-b084-113036b5c18d", 00:14:30.330 "strip_size_kb": 64, 00:14:30.330 "state": "configuring", 00:14:30.330 "raid_level": "concat", 00:14:30.330 "superblock": true, 00:14:30.330 "num_base_bdevs": 4, 00:14:30.330 "num_base_bdevs_discovered": 2, 00:14:30.330 "num_base_bdevs_operational": 4, 00:14:30.330 "base_bdevs_list": [ 00:14:30.330 { 00:14:30.330 "name": "BaseBdev1", 00:14:30.330 "uuid": "ca13ddb8-2712-11ef-b084-113036b5c18d", 00:14:30.330 "is_configured": true, 00:14:30.330 "data_offset": 2048, 00:14:30.330 "data_size": 63488 00:14:30.330 }, 00:14:30.330 { 00:14:30.330 "name": "BaseBdev2", 00:14:30.330 "uuid": "cb8a0768-2712-11ef-b084-113036b5c18d", 00:14:30.330 "is_configured": true, 00:14:30.330 "data_offset": 2048, 00:14:30.330 "data_size": 63488 00:14:30.330 }, 00:14:30.330 { 00:14:30.330 "name": "BaseBdev3", 00:14:30.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.330 "is_configured": false, 00:14:30.330 "data_offset": 0, 00:14:30.330 "data_size": 0 00:14:30.330 }, 00:14:30.330 { 00:14:30.330 "name": "BaseBdev4", 00:14:30.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.330 "is_configured": false, 00:14:30.330 "data_offset": 0, 00:14:30.330 "data_size": 0 00:14:30.330 } 00:14:30.330 ] 00:14:30.330 }' 00:14:30.330 10:18:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:14:30.330 10:18:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.896 10:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:30.896 [2024-06-10 10:18:36.469174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:30.896 BaseBdev3 00:14:30.896 10:18:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:30.896 10:18:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:14:30.896 10:18:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:30.896 10:18:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:30.896 10:18:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:30.896 10:18:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:30.896 10:18:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:31.464 10:18:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:31.464 [ 00:14:31.464 { 00:14:31.464 "name": "BaseBdev3", 00:14:31.464 "aliases": [ 00:14:31.464 "cc71f96c-2712-11ef-b084-113036b5c18d" 00:14:31.464 ], 00:14:31.464 "product_name": "Malloc disk", 00:14:31.464 "block_size": 512, 00:14:31.464 "num_blocks": 65536, 00:14:31.464 "uuid": "cc71f96c-2712-11ef-b084-113036b5c18d", 00:14:31.464 "assigned_rate_limits": { 00:14:31.464 "rw_ios_per_sec": 0, 00:14:31.464 "rw_mbytes_per_sec": 0, 00:14:31.464 "r_mbytes_per_sec": 0, 00:14:31.464 "w_mbytes_per_sec": 0 00:14:31.464 }, 00:14:31.464 "claimed": true, 00:14:31.464 "claim_type": "exclusive_write", 00:14:31.464 "zoned": false, 00:14:31.464 "supported_io_types": { 00:14:31.464 "read": true, 00:14:31.464 "write": true, 00:14:31.464 "unmap": true, 00:14:31.464 "write_zeroes": true, 00:14:31.464 "flush": true, 00:14:31.464 "reset": true, 00:14:31.464 "compare": false, 00:14:31.464 "compare_and_write": false, 00:14:31.464 "abort": true, 00:14:31.464 "nvme_admin": false, 00:14:31.464 "nvme_io": false 00:14:31.464 }, 00:14:31.464 "memory_domains": [ 00:14:31.464 { 00:14:31.464 "dma_device_id": "system", 00:14:31.464 "dma_device_type": 1 00:14:31.464 }, 00:14:31.464 { 00:14:31.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.464 "dma_device_type": 2 00:14:31.464 } 00:14:31.464 ], 00:14:31.464 "driver_specific": {} 00:14:31.464 } 00:14:31.464 ] 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:31.722 10:18:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:31.722 "name": "Existed_Raid", 00:14:31.722 "uuid": "caf6edb7-2712-11ef-b084-113036b5c18d", 00:14:31.722 "strip_size_kb": 64, 00:14:31.722 "state": "configuring", 00:14:31.722 "raid_level": "concat", 00:14:31.722 "superblock": true, 00:14:31.722 "num_base_bdevs": 4, 00:14:31.722 "num_base_bdevs_discovered": 3, 00:14:31.722 "num_base_bdevs_operational": 4, 00:14:31.722 "base_bdevs_list": [ 00:14:31.722 { 00:14:31.722 "name": "BaseBdev1", 00:14:31.722 "uuid": "ca13ddb8-2712-11ef-b084-113036b5c18d", 00:14:31.722 "is_configured": true, 00:14:31.722 "data_offset": 2048, 00:14:31.722 "data_size": 63488 00:14:31.722 }, 00:14:31.722 { 00:14:31.722 "name": "BaseBdev2", 00:14:31.722 "uuid": "cb8a0768-2712-11ef-b084-113036b5c18d", 00:14:31.722 "is_configured": true, 00:14:31.722 "data_offset": 2048, 00:14:31.722 "data_size": 63488 00:14:31.722 }, 00:14:31.722 { 00:14:31.722 "name": "BaseBdev3", 00:14:31.722 "uuid": "cc71f96c-2712-11ef-b084-113036b5c18d", 00:14:31.722 "is_configured": true, 00:14:31.722 "data_offset": 2048, 00:14:31.722 "data_size": 63488 00:14:31.722 }, 00:14:31.722 { 00:14:31.722 "name": "BaseBdev4", 00:14:31.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.722 "is_configured": false, 00:14:31.722 "data_offset": 0, 00:14:31.722 "data_size": 0 00:14:31.722 } 00:14:31.722 ] 00:14:31.722 }' 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:31.722 10:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.288 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:32.288 [2024-06-10 10:18:37.885247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:32.288 [2024-06-10 10:18:37.885311] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b8bda00 00:14:32.288 [2024-06-10 10:18:37.885316] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:32.288 [2024-06-10 10:18:37.885333] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82b920ec0 00:14:32.288 [2024-06-10 10:18:37.885371] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b8bda00 00:14:32.288 [2024-06-10 10:18:37.885375] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b8bda00 00:14:32.288 [2024-06-10 10:18:37.885389] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.288 BaseBdev4 00:14:32.547 10:18:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:32.547 10:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:14:32.547 10:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:32.547 10:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:32.547 10:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:32.547 10:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:32.547 10:18:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:32.807 [ 00:14:32.807 { 00:14:32.807 "name": "BaseBdev4", 00:14:32.807 "aliases": [ 00:14:32.807 "cd4a0cbf-2712-11ef-b084-113036b5c18d" 00:14:32.807 ], 00:14:32.807 "product_name": "Malloc disk", 00:14:32.807 "block_size": 512, 00:14:32.807 "num_blocks": 65536, 00:14:32.807 "uuid": "cd4a0cbf-2712-11ef-b084-113036b5c18d", 00:14:32.807 "assigned_rate_limits": { 00:14:32.807 "rw_ios_per_sec": 0, 00:14:32.807 "rw_mbytes_per_sec": 0, 00:14:32.807 "r_mbytes_per_sec": 0, 00:14:32.807 "w_mbytes_per_sec": 0 00:14:32.807 }, 00:14:32.807 "claimed": true, 00:14:32.807 "claim_type": "exclusive_write", 00:14:32.807 "zoned": false, 00:14:32.807 "supported_io_types": { 00:14:32.807 "read": true, 00:14:32.807 "write": true, 00:14:32.807 "unmap": true, 00:14:32.807 "write_zeroes": true, 00:14:32.807 "flush": true, 00:14:32.807 "reset": true, 00:14:32.807 "compare": false, 00:14:32.807 "compare_and_write": false, 00:14:32.807 "abort": true, 00:14:32.807 "nvme_admin": false, 00:14:32.807 "nvme_io": false 00:14:32.807 }, 00:14:32.807 "memory_domains": [ 00:14:32.807 { 00:14:32.807 "dma_device_id": "system", 00:14:32.807 "dma_device_type": 1 00:14:32.807 }, 00:14:32.807 { 00:14:32.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.807 "dma_device_type": 2 00:14:32.807 } 00:14:32.807 ], 00:14:32.807 "driver_specific": {} 00:14:32.807 } 00:14:32.807 ] 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.807 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.374 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.374 "name": "Existed_Raid", 00:14:33.374 "uuid": "caf6edb7-2712-11ef-b084-113036b5c18d", 00:14:33.374 "strip_size_kb": 64, 00:14:33.374 "state": "online", 00:14:33.374 "raid_level": "concat", 00:14:33.374 "superblock": true, 00:14:33.374 "num_base_bdevs": 4, 00:14:33.374 "num_base_bdevs_discovered": 4, 00:14:33.374 "num_base_bdevs_operational": 4, 00:14:33.374 "base_bdevs_list": [ 00:14:33.374 { 00:14:33.374 "name": "BaseBdev1", 00:14:33.374 "uuid": "ca13ddb8-2712-11ef-b084-113036b5c18d", 00:14:33.374 "is_configured": true, 00:14:33.374 "data_offset": 2048, 00:14:33.374 "data_size": 63488 00:14:33.374 }, 00:14:33.374 { 00:14:33.374 "name": "BaseBdev2", 00:14:33.374 "uuid": "cb8a0768-2712-11ef-b084-113036b5c18d", 00:14:33.374 "is_configured": true, 00:14:33.374 "data_offset": 2048, 00:14:33.374 "data_size": 63488 00:14:33.374 }, 00:14:33.374 { 00:14:33.374 "name": "BaseBdev3", 00:14:33.374 "uuid": "cc71f96c-2712-11ef-b084-113036b5c18d", 00:14:33.374 "is_configured": true, 00:14:33.374 "data_offset": 2048, 00:14:33.374 "data_size": 63488 00:14:33.374 }, 00:14:33.374 { 00:14:33.374 "name": "BaseBdev4", 00:14:33.374 "uuid": "cd4a0cbf-2712-11ef-b084-113036b5c18d", 00:14:33.374 "is_configured": true, 00:14:33.374 "data_offset": 2048, 00:14:33.374 "data_size": 63488 00:14:33.374 } 00:14:33.374 ] 00:14:33.374 }' 00:14:33.374 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.374 10:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.374 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:33.374 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:33.374 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:33.374 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:33.374 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:33.374 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:33.374 10:18:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:33.374 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:33.632 [2024-06-10 10:18:39.149216] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.632 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:33.632 "name": "Existed_Raid", 00:14:33.632 "aliases": [ 00:14:33.632 "caf6edb7-2712-11ef-b084-113036b5c18d" 00:14:33.632 ], 00:14:33.632 "product_name": "Raid Volume", 00:14:33.632 "block_size": 512, 00:14:33.632 "num_blocks": 253952, 00:14:33.632 "uuid": "caf6edb7-2712-11ef-b084-113036b5c18d", 00:14:33.632 "assigned_rate_limits": { 00:14:33.632 "rw_ios_per_sec": 0, 00:14:33.632 "rw_mbytes_per_sec": 0, 00:14:33.632 "r_mbytes_per_sec": 0, 00:14:33.632 "w_mbytes_per_sec": 0 00:14:33.632 }, 00:14:33.632 "claimed": false, 00:14:33.632 "zoned": false, 00:14:33.632 "supported_io_types": { 00:14:33.632 "read": true, 00:14:33.632 "write": true, 00:14:33.632 "unmap": true, 00:14:33.632 "write_zeroes": true, 00:14:33.632 "flush": true, 00:14:33.633 "reset": true, 00:14:33.633 "compare": false, 00:14:33.633 "compare_and_write": false, 00:14:33.633 "abort": false, 00:14:33.633 "nvme_admin": false, 00:14:33.633 "nvme_io": false 00:14:33.633 }, 00:14:33.633 "memory_domains": [ 00:14:33.633 { 00:14:33.633 "dma_device_id": "system", 00:14:33.633 "dma_device_type": 1 00:14:33.633 }, 00:14:33.633 { 00:14:33.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.633 "dma_device_type": 2 00:14:33.633 }, 00:14:33.633 { 00:14:33.633 "dma_device_id": "system", 00:14:33.633 "dma_device_type": 1 00:14:33.633 }, 00:14:33.633 { 00:14:33.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.633 "dma_device_type": 2 00:14:33.633 }, 00:14:33.633 { 00:14:33.633 "dma_device_id": "system", 00:14:33.633 "dma_device_type": 1 00:14:33.633 }, 00:14:33.633 { 00:14:33.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.633 "dma_device_type": 2 00:14:33.633 }, 00:14:33.633 { 00:14:33.633 "dma_device_id": "system", 00:14:33.633 "dma_device_type": 1 00:14:33.633 }, 00:14:33.633 { 00:14:33.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.633 "dma_device_type": 2 00:14:33.633 } 00:14:33.633 ], 00:14:33.633 "driver_specific": { 00:14:33.633 "raid": { 00:14:33.633 "uuid": "caf6edb7-2712-11ef-b084-113036b5c18d", 00:14:33.633 "strip_size_kb": 64, 00:14:33.633 "state": "online", 00:14:33.633 "raid_level": "concat", 00:14:33.633 "superblock": true, 00:14:33.633 "num_base_bdevs": 4, 00:14:33.633 "num_base_bdevs_discovered": 4, 00:14:33.633 "num_base_bdevs_operational": 4, 00:14:33.633 "base_bdevs_list": [ 00:14:33.633 { 00:14:33.633 "name": "BaseBdev1", 00:14:33.633 "uuid": "ca13ddb8-2712-11ef-b084-113036b5c18d", 00:14:33.633 "is_configured": true, 00:14:33.633 "data_offset": 2048, 00:14:33.633 "data_size": 63488 00:14:33.633 }, 00:14:33.633 { 00:14:33.633 "name": "BaseBdev2", 00:14:33.633 "uuid": "cb8a0768-2712-11ef-b084-113036b5c18d", 00:14:33.633 "is_configured": true, 00:14:33.633 "data_offset": 2048, 00:14:33.633 "data_size": 63488 00:14:33.633 }, 00:14:33.633 { 00:14:33.633 "name": "BaseBdev3", 00:14:33.633 "uuid": "cc71f96c-2712-11ef-b084-113036b5c18d", 00:14:33.633 "is_configured": true, 00:14:33.633 "data_offset": 2048, 00:14:33.633 "data_size": 63488 00:14:33.633 }, 00:14:33.633 { 00:14:33.633 "name": "BaseBdev4", 
00:14:33.633 "uuid": "cd4a0cbf-2712-11ef-b084-113036b5c18d", 00:14:33.633 "is_configured": true, 00:14:33.633 "data_offset": 2048, 00:14:33.633 "data_size": 63488 00:14:33.633 } 00:14:33.633 ] 00:14:33.633 } 00:14:33.633 } 00:14:33.633 }' 00:14:33.633 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:33.633 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:33.633 BaseBdev2 00:14:33.633 BaseBdev3 00:14:33.633 BaseBdev4' 00:14:33.633 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:33.633 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:33.633 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:33.892 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:33.892 "name": "BaseBdev1", 00:14:33.892 "aliases": [ 00:14:33.892 "ca13ddb8-2712-11ef-b084-113036b5c18d" 00:14:33.892 ], 00:14:33.892 "product_name": "Malloc disk", 00:14:33.892 "block_size": 512, 00:14:33.892 "num_blocks": 65536, 00:14:33.892 "uuid": "ca13ddb8-2712-11ef-b084-113036b5c18d", 00:14:33.892 "assigned_rate_limits": { 00:14:33.892 "rw_ios_per_sec": 0, 00:14:33.892 "rw_mbytes_per_sec": 0, 00:14:33.892 "r_mbytes_per_sec": 0, 00:14:33.892 "w_mbytes_per_sec": 0 00:14:33.892 }, 00:14:33.892 "claimed": true, 00:14:33.892 "claim_type": "exclusive_write", 00:14:33.892 "zoned": false, 00:14:33.892 "supported_io_types": { 00:14:33.892 "read": true, 00:14:33.892 "write": true, 00:14:33.892 "unmap": true, 00:14:33.892 "write_zeroes": true, 00:14:33.892 "flush": true, 00:14:33.892 "reset": true, 00:14:33.892 "compare": false, 00:14:33.892 "compare_and_write": false, 00:14:33.892 "abort": true, 00:14:33.892 "nvme_admin": false, 00:14:33.892 "nvme_io": false 00:14:33.892 }, 00:14:33.892 "memory_domains": [ 00:14:33.892 { 00:14:33.892 "dma_device_id": "system", 00:14:33.892 "dma_device_type": 1 00:14:33.892 }, 00:14:33.892 { 00:14:33.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.892 "dma_device_type": 2 00:14:33.892 } 00:14:33.892 ], 00:14:33.892 "driver_specific": {} 00:14:33.892 }' 00:14:33.892 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:34.153 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:34.412 "name": "BaseBdev2", 00:14:34.412 "aliases": [ 00:14:34.412 "cb8a0768-2712-11ef-b084-113036b5c18d" 00:14:34.412 ], 00:14:34.412 "product_name": "Malloc disk", 00:14:34.412 "block_size": 512, 00:14:34.412 "num_blocks": 65536, 00:14:34.412 "uuid": "cb8a0768-2712-11ef-b084-113036b5c18d", 00:14:34.412 "assigned_rate_limits": { 00:14:34.412 "rw_ios_per_sec": 0, 00:14:34.412 "rw_mbytes_per_sec": 0, 00:14:34.412 "r_mbytes_per_sec": 0, 00:14:34.412 "w_mbytes_per_sec": 0 00:14:34.412 }, 00:14:34.412 "claimed": true, 00:14:34.412 "claim_type": "exclusive_write", 00:14:34.412 "zoned": false, 00:14:34.412 "supported_io_types": { 00:14:34.412 "read": true, 00:14:34.412 "write": true, 00:14:34.412 "unmap": true, 00:14:34.412 "write_zeroes": true, 00:14:34.412 "flush": true, 00:14:34.412 "reset": true, 00:14:34.412 "compare": false, 00:14:34.412 "compare_and_write": false, 00:14:34.412 "abort": true, 00:14:34.412 "nvme_admin": false, 00:14:34.412 "nvme_io": false 00:14:34.412 }, 00:14:34.412 "memory_domains": [ 00:14:34.412 { 00:14:34.412 "dma_device_id": "system", 00:14:34.412 "dma_device_type": 1 00:14:34.412 }, 00:14:34.412 { 00:14:34.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.412 "dma_device_type": 2 00:14:34.412 } 00:14:34.412 ], 00:14:34.412 "driver_specific": {} 00:14:34.412 }' 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 00:14:34.412 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:34.672 "name": "BaseBdev3", 00:14:34.672 "aliases": [ 00:14:34.672 "cc71f96c-2712-11ef-b084-113036b5c18d" 00:14:34.672 ], 00:14:34.672 "product_name": "Malloc disk", 00:14:34.672 "block_size": 512, 00:14:34.672 "num_blocks": 65536, 00:14:34.672 "uuid": "cc71f96c-2712-11ef-b084-113036b5c18d", 00:14:34.672 "assigned_rate_limits": { 00:14:34.672 "rw_ios_per_sec": 0, 00:14:34.672 "rw_mbytes_per_sec": 0, 00:14:34.672 "r_mbytes_per_sec": 0, 00:14:34.672 "w_mbytes_per_sec": 0 00:14:34.672 }, 00:14:34.672 "claimed": true, 00:14:34.672 "claim_type": "exclusive_write", 00:14:34.672 "zoned": false, 00:14:34.672 "supported_io_types": { 00:14:34.672 "read": true, 00:14:34.672 "write": true, 00:14:34.672 "unmap": true, 00:14:34.672 "write_zeroes": true, 00:14:34.672 "flush": true, 00:14:34.672 "reset": true, 00:14:34.672 "compare": false, 00:14:34.672 "compare_and_write": false, 00:14:34.672 "abort": true, 00:14:34.672 "nvme_admin": false, 00:14:34.672 "nvme_io": false 00:14:34.672 }, 00:14:34.672 "memory_domains": [ 00:14:34.672 { 00:14:34.672 "dma_device_id": "system", 00:14:34.672 "dma_device_type": 1 00:14:34.672 }, 00:14:34.672 { 00:14:34.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.672 "dma_device_type": 2 00:14:34.672 } 00:14:34.672 ], 00:14:34.672 "driver_specific": {} 00:14:34.672 }' 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:34.672 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:34.930 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:34.930 "name": "BaseBdev4", 00:14:34.930 "aliases": [ 00:14:34.930 "cd4a0cbf-2712-11ef-b084-113036b5c18d" 00:14:34.930 ], 00:14:34.930 "product_name": "Malloc disk", 00:14:34.930 "block_size": 512, 
00:14:34.930 "num_blocks": 65536, 00:14:34.930 "uuid": "cd4a0cbf-2712-11ef-b084-113036b5c18d", 00:14:34.930 "assigned_rate_limits": { 00:14:34.930 "rw_ios_per_sec": 0, 00:14:34.930 "rw_mbytes_per_sec": 0, 00:14:34.930 "r_mbytes_per_sec": 0, 00:14:34.930 "w_mbytes_per_sec": 0 00:14:34.930 }, 00:14:34.930 "claimed": true, 00:14:34.930 "claim_type": "exclusive_write", 00:14:34.930 "zoned": false, 00:14:34.930 "supported_io_types": { 00:14:34.930 "read": true, 00:14:34.930 "write": true, 00:14:34.930 "unmap": true, 00:14:34.930 "write_zeroes": true, 00:14:34.930 "flush": true, 00:14:34.930 "reset": true, 00:14:34.930 "compare": false, 00:14:34.930 "compare_and_write": false, 00:14:34.930 "abort": true, 00:14:34.930 "nvme_admin": false, 00:14:34.930 "nvme_io": false 00:14:34.930 }, 00:14:34.930 "memory_domains": [ 00:14:34.930 { 00:14:34.930 "dma_device_id": "system", 00:14:34.930 "dma_device_type": 1 00:14:34.930 }, 00:14:34.930 { 00:14:34.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.930 "dma_device_type": 2 00:14:34.930 } 00:14:34.930 ], 00:14:34.930 "driver_specific": {} 00:14:34.930 }' 00:14:34.930 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.930 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.930 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:34.930 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.930 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:35.189 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:35.189 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.189 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:35.189 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:35.189 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.189 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:35.189 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:35.189 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:35.448 [2024-06-10 10:18:40.877283] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:35.448 [2024-06-10 10:18:40.877304] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.448 [2024-06-10 10:18:40.877316] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- 
# verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.448 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.706 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.706 "name": "Existed_Raid", 00:14:35.706 "uuid": "caf6edb7-2712-11ef-b084-113036b5c18d", 00:14:35.706 "strip_size_kb": 64, 00:14:35.706 "state": "offline", 00:14:35.706 "raid_level": "concat", 00:14:35.706 "superblock": true, 00:14:35.706 "num_base_bdevs": 4, 00:14:35.706 "num_base_bdevs_discovered": 3, 00:14:35.706 "num_base_bdevs_operational": 3, 00:14:35.706 "base_bdevs_list": [ 00:14:35.706 { 00:14:35.706 "name": null, 00:14:35.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.706 "is_configured": false, 00:14:35.706 "data_offset": 2048, 00:14:35.706 "data_size": 63488 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "name": "BaseBdev2", 00:14:35.706 "uuid": "cb8a0768-2712-11ef-b084-113036b5c18d", 00:14:35.706 "is_configured": true, 00:14:35.706 "data_offset": 2048, 00:14:35.706 "data_size": 63488 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "name": "BaseBdev3", 00:14:35.706 "uuid": "cc71f96c-2712-11ef-b084-113036b5c18d", 00:14:35.706 "is_configured": true, 00:14:35.706 "data_offset": 2048, 00:14:35.706 "data_size": 63488 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "name": "BaseBdev4", 00:14:35.706 "uuid": "cd4a0cbf-2712-11ef-b084-113036b5c18d", 00:14:35.706 "is_configured": true, 00:14:35.706 "data_offset": 2048, 00:14:35.706 "data_size": 63488 00:14:35.706 } 00:14:35.706 ] 00:14:35.706 }' 00:14:35.706 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.707 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.966 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:35.966 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:35.966 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.966 10:18:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:36.225 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:36.225 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:36.225 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:36.484 [2024-06-10 10:18:42.022120] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:36.484 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:36.484 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:36.484 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:36.484 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.742 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:36.742 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:36.742 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:37.000 [2024-06-10 10:18:42.566870] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:37.000 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:37.000 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:37.000 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.000 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:37.258 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:37.258 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:37.258 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:37.516 [2024-06-10 10:18:43.015631] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:37.516 [2024-06-10 10:18:43.015661] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8bda00 name Existed_Raid, state offline 00:14:37.516 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:37.517 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:37.517 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.517 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:37.775 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:37.775 10:18:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:37.775 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:37.775 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:37.775 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:37.775 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:38.033 BaseBdev2 00:14:38.033 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:38.033 10:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:14:38.033 10:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:38.033 10:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:38.033 10:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:38.033 10:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:38.033 10:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:38.290 10:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:38.548 [ 00:14:38.548 { 00:14:38.548 "name": "BaseBdev2", 00:14:38.548 "aliases": [ 00:14:38.548 "d0af16fd-2712-11ef-b084-113036b5c18d" 00:14:38.548 ], 00:14:38.548 "product_name": "Malloc disk", 00:14:38.548 "block_size": 512, 00:14:38.548 "num_blocks": 65536, 00:14:38.548 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:38.548 "assigned_rate_limits": { 00:14:38.548 "rw_ios_per_sec": 0, 00:14:38.548 "rw_mbytes_per_sec": 0, 00:14:38.548 "r_mbytes_per_sec": 0, 00:14:38.548 "w_mbytes_per_sec": 0 00:14:38.548 }, 00:14:38.548 "claimed": false, 00:14:38.548 "zoned": false, 00:14:38.548 "supported_io_types": { 00:14:38.548 "read": true, 00:14:38.548 "write": true, 00:14:38.548 "unmap": true, 00:14:38.548 "write_zeroes": true, 00:14:38.548 "flush": true, 00:14:38.548 "reset": true, 00:14:38.548 "compare": false, 00:14:38.548 "compare_and_write": false, 00:14:38.548 "abort": true, 00:14:38.548 "nvme_admin": false, 00:14:38.548 "nvme_io": false 00:14:38.548 }, 00:14:38.548 "memory_domains": [ 00:14:38.548 { 00:14:38.548 "dma_device_id": "system", 00:14:38.548 "dma_device_type": 1 00:14:38.548 }, 00:14:38.548 { 00:14:38.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.548 "dma_device_type": 2 00:14:38.548 } 00:14:38.548 ], 00:14:38.548 "driver_specific": {} 00:14:38.548 } 00:14:38.548 ] 00:14:38.805 10:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:38.805 10:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:38.805 10:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:38.805 10:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:39.064 BaseBdev3 
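The trace above tears the raid volume down and then recreates each standalone base bdev with bdev_malloc_create, immediately followed by bdev_wait_for_examine and a bdev_get_bdevs lookup with a 2000 ms timeout. A minimal sketch of that create-and-wait sequence, reusing the rpc.py path and socket from this run (the create_and_wait helper name is illustrative, not the script's own waitforbdev):

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    create_and_wait() {
        local name=$1
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$name"   # 32 MB malloc bdev, 512-byte blocks
        "$rpc" -s "$sock" bdev_wait_for_examine                  # let examine callbacks settle
        "$rpc" -s "$sock" bdev_get_bdevs -b "$name" -t 2000      # errors out if the bdev never shows up
    }

    create_and_wait BaseBdev2
    create_and_wait BaseBdev3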
00:14:39.064 10:18:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:39.064 10:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:14:39.064 10:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:39.064 10:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:39.064 10:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:39.064 10:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:39.064 10:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:39.322 10:18:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:39.579 [ 00:14:39.579 { 00:14:39.579 "name": "BaseBdev3", 00:14:39.579 "aliases": [ 00:14:39.579 "d1338a57-2712-11ef-b084-113036b5c18d" 00:14:39.579 ], 00:14:39.579 "product_name": "Malloc disk", 00:14:39.579 "block_size": 512, 00:14:39.579 "num_blocks": 65536, 00:14:39.579 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:39.579 "assigned_rate_limits": { 00:14:39.579 "rw_ios_per_sec": 0, 00:14:39.579 "rw_mbytes_per_sec": 0, 00:14:39.579 "r_mbytes_per_sec": 0, 00:14:39.579 "w_mbytes_per_sec": 0 00:14:39.579 }, 00:14:39.579 "claimed": false, 00:14:39.579 "zoned": false, 00:14:39.579 "supported_io_types": { 00:14:39.579 "read": true, 00:14:39.579 "write": true, 00:14:39.579 "unmap": true, 00:14:39.579 "write_zeroes": true, 00:14:39.579 "flush": true, 00:14:39.579 "reset": true, 00:14:39.579 "compare": false, 00:14:39.579 "compare_and_write": false, 00:14:39.579 "abort": true, 00:14:39.579 "nvme_admin": false, 00:14:39.579 "nvme_io": false 00:14:39.579 }, 00:14:39.579 "memory_domains": [ 00:14:39.579 { 00:14:39.579 "dma_device_id": "system", 00:14:39.579 "dma_device_type": 1 00:14:39.579 }, 00:14:39.579 { 00:14:39.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.579 "dma_device_type": 2 00:14:39.579 } 00:14:39.579 ], 00:14:39.579 "driver_specific": {} 00:14:39.579 } 00:14:39.579 ] 00:14:39.579 10:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:39.579 10:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:39.579 10:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:39.579 10:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:39.837 BaseBdev4 00:14:39.837 10:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:39.837 10:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:14:39.837 10:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:39.837 10:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:39.837 10:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:39.838 10:18:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:39.838 10:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:40.096 10:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:40.355 [ 00:14:40.355 { 00:14:40.355 "name": "BaseBdev4", 00:14:40.355 "aliases": [ 00:14:40.355 "d1beb4d1-2712-11ef-b084-113036b5c18d" 00:14:40.355 ], 00:14:40.355 "product_name": "Malloc disk", 00:14:40.355 "block_size": 512, 00:14:40.355 "num_blocks": 65536, 00:14:40.355 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:40.355 "assigned_rate_limits": { 00:14:40.355 "rw_ios_per_sec": 0, 00:14:40.355 "rw_mbytes_per_sec": 0, 00:14:40.355 "r_mbytes_per_sec": 0, 00:14:40.355 "w_mbytes_per_sec": 0 00:14:40.355 }, 00:14:40.355 "claimed": false, 00:14:40.355 "zoned": false, 00:14:40.355 "supported_io_types": { 00:14:40.355 "read": true, 00:14:40.355 "write": true, 00:14:40.355 "unmap": true, 00:14:40.355 "write_zeroes": true, 00:14:40.355 "flush": true, 00:14:40.355 "reset": true, 00:14:40.355 "compare": false, 00:14:40.355 "compare_and_write": false, 00:14:40.355 "abort": true, 00:14:40.355 "nvme_admin": false, 00:14:40.355 "nvme_io": false 00:14:40.355 }, 00:14:40.355 "memory_domains": [ 00:14:40.355 { 00:14:40.355 "dma_device_id": "system", 00:14:40.355 "dma_device_type": 1 00:14:40.355 }, 00:14:40.355 { 00:14:40.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.355 "dma_device_type": 2 00:14:40.355 } 00:14:40.355 ], 00:14:40.355 "driver_specific": {} 00:14:40.355 } 00:14:40.355 ] 00:14:40.355 10:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:40.355 10:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:40.355 10:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:40.355 10:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:40.614 [2024-06-10 10:18:46.104591] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.614 [2024-06-10 10:18:46.104660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.614 [2024-06-10 10:18:46.104667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.614 [2024-06-10 10:18:46.105065] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.614 [2024-06-10 10:18:46.105077] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
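With the base bdevs back, bdev_raid_create is invoked while BaseBdev1 is still missing, so the raid stays in the configuring state and verify_raid_bdev_state is run against the expected values (configuring / concat / strip size 64 / 4 base bdevs). A condensed sketch of that check, using the same bdev_raid_get_bdevs call and jq filter as the trace (the field-by-field comparison below is illustrative, not the script's actual implementation):

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')

    # Compare the fields the test asserts on against the expected values.
    [ "$(jq -r .state          <<<"$info")" = configuring ] &&
    [ "$(jq -r .raid_level     <<<"$info")" = concat ] &&
    [ "$(jq -r .strip_size_kb  <<<"$info")" = 64 ] &&
    [ "$(jq -r .num_base_bdevs <<<"$info")" = 4 ]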
00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.614 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.874 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.874 "name": "Existed_Raid", 00:14:40.874 "uuid": "d2303c67-2712-11ef-b084-113036b5c18d", 00:14:40.874 "strip_size_kb": 64, 00:14:40.874 "state": "configuring", 00:14:40.874 "raid_level": "concat", 00:14:40.874 "superblock": true, 00:14:40.874 "num_base_bdevs": 4, 00:14:40.874 "num_base_bdevs_discovered": 3, 00:14:40.874 "num_base_bdevs_operational": 4, 00:14:40.874 "base_bdevs_list": [ 00:14:40.874 { 00:14:40.874 "name": "BaseBdev1", 00:14:40.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.874 "is_configured": false, 00:14:40.874 "data_offset": 0, 00:14:40.874 "data_size": 0 00:14:40.874 }, 00:14:40.874 { 00:14:40.874 "name": "BaseBdev2", 00:14:40.874 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:40.874 "is_configured": true, 00:14:40.874 "data_offset": 2048, 00:14:40.874 "data_size": 63488 00:14:40.874 }, 00:14:40.874 { 00:14:40.874 "name": "BaseBdev3", 00:14:40.874 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:40.874 "is_configured": true, 00:14:40.874 "data_offset": 2048, 00:14:40.874 "data_size": 63488 00:14:40.874 }, 00:14:40.874 { 00:14:40.874 "name": "BaseBdev4", 00:14:40.874 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:40.874 "is_configured": true, 00:14:40.874 "data_offset": 2048, 00:14:40.874 "data_size": 63488 00:14:40.874 } 00:14:40.874 ] 00:14:40.874 }' 00:14:40.874 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.874 10:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.133 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:41.393 [2024-06-10 10:18:46.972614] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.393 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:41.393 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:41.393 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.393 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:41.393 10:18:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.651 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:41.651 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.651 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.651 10:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.651 10:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.651 10:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.651 10:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.910 10:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.910 "name": "Existed_Raid", 00:14:41.910 "uuid": "d2303c67-2712-11ef-b084-113036b5c18d", 00:14:41.910 "strip_size_kb": 64, 00:14:41.910 "state": "configuring", 00:14:41.910 "raid_level": "concat", 00:14:41.910 "superblock": true, 00:14:41.910 "num_base_bdevs": 4, 00:14:41.910 "num_base_bdevs_discovered": 2, 00:14:41.910 "num_base_bdevs_operational": 4, 00:14:41.910 "base_bdevs_list": [ 00:14:41.910 { 00:14:41.910 "name": "BaseBdev1", 00:14:41.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.910 "is_configured": false, 00:14:41.910 "data_offset": 0, 00:14:41.910 "data_size": 0 00:14:41.910 }, 00:14:41.910 { 00:14:41.910 "name": null, 00:14:41.910 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:41.910 "is_configured": false, 00:14:41.910 "data_offset": 2048, 00:14:41.910 "data_size": 63488 00:14:41.910 }, 00:14:41.910 { 00:14:41.910 "name": "BaseBdev3", 00:14:41.910 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:41.910 "is_configured": true, 00:14:41.910 "data_offset": 2048, 00:14:41.910 "data_size": 63488 00:14:41.910 }, 00:14:41.910 { 00:14:41.910 "name": "BaseBdev4", 00:14:41.910 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:41.910 "is_configured": true, 00:14:41.910 "data_offset": 2048, 00:14:41.910 "data_size": 63488 00:14:41.910 } 00:14:41.910 ] 00:14:41.910 }' 00:14:41.910 10:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.910 10:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.168 10:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.168 10:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:42.424 10:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:42.424 10:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:42.683 [2024-06-10 10:18:48.120757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:42.683 BaseBdev1 00:14:42.683 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:42.683 10:18:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:14:42.683 10:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:42.683 10:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:42.683 10:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:42.683 10:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:42.683 10:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:42.939 10:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:43.197 [ 00:14:43.197 { 00:14:43.197 "name": "BaseBdev1", 00:14:43.197 "aliases": [ 00:14:43.197 "d363dd78-2712-11ef-b084-113036b5c18d" 00:14:43.197 ], 00:14:43.197 "product_name": "Malloc disk", 00:14:43.197 "block_size": 512, 00:14:43.197 "num_blocks": 65536, 00:14:43.197 "uuid": "d363dd78-2712-11ef-b084-113036b5c18d", 00:14:43.197 "assigned_rate_limits": { 00:14:43.197 "rw_ios_per_sec": 0, 00:14:43.197 "rw_mbytes_per_sec": 0, 00:14:43.197 "r_mbytes_per_sec": 0, 00:14:43.197 "w_mbytes_per_sec": 0 00:14:43.197 }, 00:14:43.197 "claimed": true, 00:14:43.197 "claim_type": "exclusive_write", 00:14:43.197 "zoned": false, 00:14:43.197 "supported_io_types": { 00:14:43.197 "read": true, 00:14:43.197 "write": true, 00:14:43.197 "unmap": true, 00:14:43.197 "write_zeroes": true, 00:14:43.197 "flush": true, 00:14:43.197 "reset": true, 00:14:43.197 "compare": false, 00:14:43.197 "compare_and_write": false, 00:14:43.197 "abort": true, 00:14:43.197 "nvme_admin": false, 00:14:43.197 "nvme_io": false 00:14:43.197 }, 00:14:43.197 "memory_domains": [ 00:14:43.197 { 00:14:43.197 "dma_device_id": "system", 00:14:43.197 "dma_device_type": 1 00:14:43.197 }, 00:14:43.197 { 00:14:43.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.197 "dma_device_type": 2 00:14:43.197 } 00:14:43.197 ], 00:14:43.197 "driver_specific": {} 00:14:43.197 } 00:14:43.197 ] 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.197 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.455 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:43.455 "name": "Existed_Raid", 00:14:43.455 "uuid": "d2303c67-2712-11ef-b084-113036b5c18d", 00:14:43.455 "strip_size_kb": 64, 00:14:43.455 "state": "configuring", 00:14:43.455 "raid_level": "concat", 00:14:43.455 "superblock": true, 00:14:43.455 "num_base_bdevs": 4, 00:14:43.455 "num_base_bdevs_discovered": 3, 00:14:43.455 "num_base_bdevs_operational": 4, 00:14:43.455 "base_bdevs_list": [ 00:14:43.455 { 00:14:43.455 "name": "BaseBdev1", 00:14:43.455 "uuid": "d363dd78-2712-11ef-b084-113036b5c18d", 00:14:43.455 "is_configured": true, 00:14:43.455 "data_offset": 2048, 00:14:43.455 "data_size": 63488 00:14:43.455 }, 00:14:43.455 { 00:14:43.455 "name": null, 00:14:43.455 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:43.455 "is_configured": false, 00:14:43.455 "data_offset": 2048, 00:14:43.455 "data_size": 63488 00:14:43.455 }, 00:14:43.455 { 00:14:43.455 "name": "BaseBdev3", 00:14:43.455 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:43.455 "is_configured": true, 00:14:43.455 "data_offset": 2048, 00:14:43.455 "data_size": 63488 00:14:43.455 }, 00:14:43.455 { 00:14:43.455 "name": "BaseBdev4", 00:14:43.455 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:43.455 "is_configured": true, 00:14:43.455 "data_offset": 2048, 00:14:43.455 "data_size": 63488 00:14:43.455 } 00:14:43.455 ] 00:14:43.455 }' 00:14:43.455 10:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:43.456 10:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.714 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:43.714 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:44.281 [2024-06-10 10:18:49.816726] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:44.281 10:18:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.281 10:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.541 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.541 "name": "Existed_Raid", 00:14:44.541 "uuid": "d2303c67-2712-11ef-b084-113036b5c18d", 00:14:44.541 "strip_size_kb": 64, 00:14:44.541 "state": "configuring", 00:14:44.541 "raid_level": "concat", 00:14:44.541 "superblock": true, 00:14:44.541 "num_base_bdevs": 4, 00:14:44.541 "num_base_bdevs_discovered": 2, 00:14:44.541 "num_base_bdevs_operational": 4, 00:14:44.541 "base_bdevs_list": [ 00:14:44.541 { 00:14:44.541 "name": "BaseBdev1", 00:14:44.541 "uuid": "d363dd78-2712-11ef-b084-113036b5c18d", 00:14:44.541 "is_configured": true, 00:14:44.541 "data_offset": 2048, 00:14:44.541 "data_size": 63488 00:14:44.541 }, 00:14:44.541 { 00:14:44.541 "name": null, 00:14:44.541 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:44.541 "is_configured": false, 00:14:44.541 "data_offset": 2048, 00:14:44.541 "data_size": 63488 00:14:44.541 }, 00:14:44.541 { 00:14:44.541 "name": null, 00:14:44.541 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:44.541 "is_configured": false, 00:14:44.541 "data_offset": 2048, 00:14:44.541 "data_size": 63488 00:14:44.541 }, 00:14:44.541 { 00:14:44.541 "name": "BaseBdev4", 00:14:44.541 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:44.541 "is_configured": true, 00:14:44.541 "data_offset": 2048, 00:14:44.541 "data_size": 63488 00:14:44.541 } 00:14:44.541 ] 00:14:44.541 }' 00:14:44.541 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.541 10:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.107 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.107 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:45.107 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:45.107 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:45.365 [2024-06-10 10:18:50.924766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.365 10:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.932 10:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.932 "name": "Existed_Raid", 00:14:45.932 "uuid": "d2303c67-2712-11ef-b084-113036b5c18d", 00:14:45.932 "strip_size_kb": 64, 00:14:45.932 "state": "configuring", 00:14:45.932 "raid_level": "concat", 00:14:45.932 "superblock": true, 00:14:45.932 "num_base_bdevs": 4, 00:14:45.932 "num_base_bdevs_discovered": 3, 00:14:45.932 "num_base_bdevs_operational": 4, 00:14:45.932 "base_bdevs_list": [ 00:14:45.932 { 00:14:45.932 "name": "BaseBdev1", 00:14:45.932 "uuid": "d363dd78-2712-11ef-b084-113036b5c18d", 00:14:45.932 "is_configured": true, 00:14:45.932 "data_offset": 2048, 00:14:45.932 "data_size": 63488 00:14:45.932 }, 00:14:45.932 { 00:14:45.932 "name": null, 00:14:45.932 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:45.932 "is_configured": false, 00:14:45.932 "data_offset": 2048, 00:14:45.932 "data_size": 63488 00:14:45.932 }, 00:14:45.932 { 00:14:45.932 "name": "BaseBdev3", 00:14:45.932 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:45.932 "is_configured": true, 00:14:45.932 "data_offset": 2048, 00:14:45.932 "data_size": 63488 00:14:45.932 }, 00:14:45.932 { 00:14:45.932 "name": "BaseBdev4", 00:14:45.932 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:45.932 "is_configured": true, 00:14:45.932 "data_offset": 2048, 00:14:45.932 "data_size": 63488 00:14:45.932 } 00:14:45.932 ] 00:14:45.932 }' 00:14:45.932 10:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.932 10:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.932 10:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.932 10:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:46.548 10:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:46.548 10:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:46.548 [2024-06-10 10:18:52.116825] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.548 10:18:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.548 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.111 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.111 "name": "Existed_Raid", 00:14:47.111 "uuid": "d2303c67-2712-11ef-b084-113036b5c18d", 00:14:47.111 "strip_size_kb": 64, 00:14:47.111 "state": "configuring", 00:14:47.111 "raid_level": "concat", 00:14:47.111 "superblock": true, 00:14:47.111 "num_base_bdevs": 4, 00:14:47.111 "num_base_bdevs_discovered": 2, 00:14:47.111 "num_base_bdevs_operational": 4, 00:14:47.111 "base_bdevs_list": [ 00:14:47.111 { 00:14:47.111 "name": null, 00:14:47.111 "uuid": "d363dd78-2712-11ef-b084-113036b5c18d", 00:14:47.111 "is_configured": false, 00:14:47.111 "data_offset": 2048, 00:14:47.111 "data_size": 63488 00:14:47.111 }, 00:14:47.111 { 00:14:47.111 "name": null, 00:14:47.111 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:47.111 "is_configured": false, 00:14:47.111 "data_offset": 2048, 00:14:47.111 "data_size": 63488 00:14:47.111 }, 00:14:47.111 { 00:14:47.111 "name": "BaseBdev3", 00:14:47.111 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:47.111 "is_configured": true, 00:14:47.111 "data_offset": 2048, 00:14:47.111 "data_size": 63488 00:14:47.111 }, 00:14:47.111 { 00:14:47.111 "name": "BaseBdev4", 00:14:47.111 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:47.112 "is_configured": true, 00:14:47.112 "data_offset": 2048, 00:14:47.112 "data_size": 63488 00:14:47.112 } 00:14:47.112 ] 00:14:47.112 }' 00:14:47.112 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.112 10:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.112 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.112 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:47.369 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ 
false == \f\a\l\s\e ]] 00:14:47.369 10:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:47.626 [2024-06-10 10:18:53.145627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.626 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.884 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.884 "name": "Existed_Raid", 00:14:47.884 "uuid": "d2303c67-2712-11ef-b084-113036b5c18d", 00:14:47.884 "strip_size_kb": 64, 00:14:47.884 "state": "configuring", 00:14:47.884 "raid_level": "concat", 00:14:47.884 "superblock": true, 00:14:47.884 "num_base_bdevs": 4, 00:14:47.884 "num_base_bdevs_discovered": 3, 00:14:47.884 "num_base_bdevs_operational": 4, 00:14:47.884 "base_bdevs_list": [ 00:14:47.884 { 00:14:47.884 "name": null, 00:14:47.884 "uuid": "d363dd78-2712-11ef-b084-113036b5c18d", 00:14:47.884 "is_configured": false, 00:14:47.884 "data_offset": 2048, 00:14:47.884 "data_size": 63488 00:14:47.884 }, 00:14:47.884 { 00:14:47.884 "name": "BaseBdev2", 00:14:47.884 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:47.884 "is_configured": true, 00:14:47.884 "data_offset": 2048, 00:14:47.884 "data_size": 63488 00:14:47.884 }, 00:14:47.884 { 00:14:47.884 "name": "BaseBdev3", 00:14:47.884 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:47.884 "is_configured": true, 00:14:47.884 "data_offset": 2048, 00:14:47.884 "data_size": 63488 00:14:47.884 }, 00:14:47.884 { 00:14:47.884 "name": "BaseBdev4", 00:14:47.884 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:47.884 "is_configured": true, 00:14:47.884 "data_offset": 2048, 00:14:47.884 "data_size": 63488 00:14:47.884 } 00:14:47.884 ] 00:14:47.884 }' 00:14:47.884 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.884 10:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.448 10:18:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.448 10:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:48.707 10:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:48.707 10:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.707 10:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:48.965 10:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d363dd78-2712-11ef-b084-113036b5c18d 00:14:49.224 [2024-06-10 10:18:54.589797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:49.224 [2024-06-10 10:18:54.589835] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b8bdf00 00:14:49.224 [2024-06-10 10:18:54.589839] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:49.224 [2024-06-10 10:18:54.589860] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b920e20 00:14:49.224 [2024-06-10 10:18:54.589891] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b8bdf00 00:14:49.224 [2024-06-10 10:18:54.589894] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b8bdf00 00:14:49.224 [2024-06-10 10:18:54.589908] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.224 NewBaseBdev 00:14:49.224 10:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:49.224 10:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:14:49.224 10:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:14:49.224 10:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:14:49.224 10:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:14:49.224 10:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:14:49.224 10:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.482 10:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:49.741 [ 00:14:49.741 { 00:14:49.741 "name": "NewBaseBdev", 00:14:49.741 "aliases": [ 00:14:49.741 "d363dd78-2712-11ef-b084-113036b5c18d" 00:14:49.741 ], 00:14:49.741 "product_name": "Malloc disk", 00:14:49.741 "block_size": 512, 00:14:49.741 "num_blocks": 65536, 00:14:49.741 "uuid": "d363dd78-2712-11ef-b084-113036b5c18d", 00:14:49.741 "assigned_rate_limits": { 00:14:49.741 "rw_ios_per_sec": 0, 00:14:49.741 "rw_mbytes_per_sec": 0, 00:14:49.741 "r_mbytes_per_sec": 0, 00:14:49.742 "w_mbytes_per_sec": 0 00:14:49.742 }, 00:14:49.742 "claimed": true, 
00:14:49.742 "claim_type": "exclusive_write", 00:14:49.742 "zoned": false, 00:14:49.742 "supported_io_types": { 00:14:49.742 "read": true, 00:14:49.742 "write": true, 00:14:49.742 "unmap": true, 00:14:49.742 "write_zeroes": true, 00:14:49.742 "flush": true, 00:14:49.742 "reset": true, 00:14:49.742 "compare": false, 00:14:49.742 "compare_and_write": false, 00:14:49.742 "abort": true, 00:14:49.742 "nvme_admin": false, 00:14:49.742 "nvme_io": false 00:14:49.742 }, 00:14:49.742 "memory_domains": [ 00:14:49.742 { 00:14:49.742 "dma_device_id": "system", 00:14:49.742 "dma_device_type": 1 00:14:49.742 }, 00:14:49.742 { 00:14:49.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.742 "dma_device_type": 2 00:14:49.742 } 00:14:49.742 ], 00:14:49.742 "driver_specific": {} 00:14:49.742 } 00:14:49.742 ] 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.742 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.001 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.001 "name": "Existed_Raid", 00:14:50.001 "uuid": "d2303c67-2712-11ef-b084-113036b5c18d", 00:14:50.001 "strip_size_kb": 64, 00:14:50.001 "state": "online", 00:14:50.001 "raid_level": "concat", 00:14:50.001 "superblock": true, 00:14:50.001 "num_base_bdevs": 4, 00:14:50.001 "num_base_bdevs_discovered": 4, 00:14:50.001 "num_base_bdevs_operational": 4, 00:14:50.001 "base_bdevs_list": [ 00:14:50.001 { 00:14:50.001 "name": "NewBaseBdev", 00:14:50.001 "uuid": "d363dd78-2712-11ef-b084-113036b5c18d", 00:14:50.001 "is_configured": true, 00:14:50.001 "data_offset": 2048, 00:14:50.001 "data_size": 63488 00:14:50.001 }, 00:14:50.001 { 00:14:50.001 "name": "BaseBdev2", 00:14:50.001 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:50.001 "is_configured": true, 00:14:50.001 "data_offset": 2048, 00:14:50.001 "data_size": 63488 00:14:50.001 }, 00:14:50.001 { 00:14:50.001 "name": "BaseBdev3", 00:14:50.001 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:50.001 "is_configured": true, 00:14:50.001 "data_offset": 2048, 
00:14:50.001 "data_size": 63488 00:14:50.001 }, 00:14:50.001 { 00:14:50.001 "name": "BaseBdev4", 00:14:50.001 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:50.001 "is_configured": true, 00:14:50.001 "data_offset": 2048, 00:14:50.001 "data_size": 63488 00:14:50.001 } 00:14:50.001 ] 00:14:50.001 }' 00:14:50.001 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.001 10:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.259 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.259 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:50.259 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:50.259 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:50.259 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:50.260 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:50.260 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:50.260 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:50.518 [2024-06-10 10:18:55.933799] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.518 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:50.518 "name": "Existed_Raid", 00:14:50.518 "aliases": [ 00:14:50.518 "d2303c67-2712-11ef-b084-113036b5c18d" 00:14:50.518 ], 00:14:50.518 "product_name": "Raid Volume", 00:14:50.518 "block_size": 512, 00:14:50.518 "num_blocks": 253952, 00:14:50.518 "uuid": "d2303c67-2712-11ef-b084-113036b5c18d", 00:14:50.518 "assigned_rate_limits": { 00:14:50.518 "rw_ios_per_sec": 0, 00:14:50.518 "rw_mbytes_per_sec": 0, 00:14:50.518 "r_mbytes_per_sec": 0, 00:14:50.518 "w_mbytes_per_sec": 0 00:14:50.518 }, 00:14:50.518 "claimed": false, 00:14:50.518 "zoned": false, 00:14:50.518 "supported_io_types": { 00:14:50.518 "read": true, 00:14:50.518 "write": true, 00:14:50.518 "unmap": true, 00:14:50.518 "write_zeroes": true, 00:14:50.518 "flush": true, 00:14:50.518 "reset": true, 00:14:50.518 "compare": false, 00:14:50.518 "compare_and_write": false, 00:14:50.518 "abort": false, 00:14:50.518 "nvme_admin": false, 00:14:50.518 "nvme_io": false 00:14:50.518 }, 00:14:50.518 "memory_domains": [ 00:14:50.518 { 00:14:50.518 "dma_device_id": "system", 00:14:50.518 "dma_device_type": 1 00:14:50.518 }, 00:14:50.518 { 00:14:50.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.518 "dma_device_type": 2 00:14:50.518 }, 00:14:50.518 { 00:14:50.518 "dma_device_id": "system", 00:14:50.518 "dma_device_type": 1 00:14:50.518 }, 00:14:50.518 { 00:14:50.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.518 "dma_device_type": 2 00:14:50.518 }, 00:14:50.518 { 00:14:50.518 "dma_device_id": "system", 00:14:50.518 "dma_device_type": 1 00:14:50.518 }, 00:14:50.518 { 00:14:50.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.518 "dma_device_type": 2 00:14:50.518 }, 00:14:50.518 { 00:14:50.518 "dma_device_id": "system", 00:14:50.518 "dma_device_type": 1 00:14:50.518 }, 00:14:50.518 { 00:14:50.518 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:50.518 "dma_device_type": 2 00:14:50.518 } 00:14:50.518 ], 00:14:50.518 "driver_specific": { 00:14:50.518 "raid": { 00:14:50.518 "uuid": "d2303c67-2712-11ef-b084-113036b5c18d", 00:14:50.518 "strip_size_kb": 64, 00:14:50.518 "state": "online", 00:14:50.518 "raid_level": "concat", 00:14:50.518 "superblock": true, 00:14:50.518 "num_base_bdevs": 4, 00:14:50.518 "num_base_bdevs_discovered": 4, 00:14:50.518 "num_base_bdevs_operational": 4, 00:14:50.518 "base_bdevs_list": [ 00:14:50.518 { 00:14:50.518 "name": "NewBaseBdev", 00:14:50.518 "uuid": "d363dd78-2712-11ef-b084-113036b5c18d", 00:14:50.518 "is_configured": true, 00:14:50.518 "data_offset": 2048, 00:14:50.518 "data_size": 63488 00:14:50.518 }, 00:14:50.518 { 00:14:50.518 "name": "BaseBdev2", 00:14:50.518 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:50.518 "is_configured": true, 00:14:50.518 "data_offset": 2048, 00:14:50.518 "data_size": 63488 00:14:50.518 }, 00:14:50.518 { 00:14:50.518 "name": "BaseBdev3", 00:14:50.518 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:50.518 "is_configured": true, 00:14:50.518 "data_offset": 2048, 00:14:50.518 "data_size": 63488 00:14:50.518 }, 00:14:50.518 { 00:14:50.518 "name": "BaseBdev4", 00:14:50.518 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:50.518 "is_configured": true, 00:14:50.518 "data_offset": 2048, 00:14:50.518 "data_size": 63488 00:14:50.518 } 00:14:50.518 ] 00:14:50.518 } 00:14:50.518 } 00:14:50.518 }' 00:14:50.518 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.518 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:50.518 BaseBdev2 00:14:50.518 BaseBdev3 00:14:50.518 BaseBdev4' 00:14:50.518 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.518 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:50.518 10:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.777 "name": "NewBaseBdev", 00:14:50.777 "aliases": [ 00:14:50.777 "d363dd78-2712-11ef-b084-113036b5c18d" 00:14:50.777 ], 00:14:50.777 "product_name": "Malloc disk", 00:14:50.777 "block_size": 512, 00:14:50.777 "num_blocks": 65536, 00:14:50.777 "uuid": "d363dd78-2712-11ef-b084-113036b5c18d", 00:14:50.777 "assigned_rate_limits": { 00:14:50.777 "rw_ios_per_sec": 0, 00:14:50.777 "rw_mbytes_per_sec": 0, 00:14:50.777 "r_mbytes_per_sec": 0, 00:14:50.777 "w_mbytes_per_sec": 0 00:14:50.777 }, 00:14:50.777 "claimed": true, 00:14:50.777 "claim_type": "exclusive_write", 00:14:50.777 "zoned": false, 00:14:50.777 "supported_io_types": { 00:14:50.777 "read": true, 00:14:50.777 "write": true, 00:14:50.777 "unmap": true, 00:14:50.777 "write_zeroes": true, 00:14:50.777 "flush": true, 00:14:50.777 "reset": true, 00:14:50.777 "compare": false, 00:14:50.777 "compare_and_write": false, 00:14:50.777 "abort": true, 00:14:50.777 "nvme_admin": false, 00:14:50.777 "nvme_io": false 00:14:50.777 }, 00:14:50.777 "memory_domains": [ 00:14:50.777 { 00:14:50.777 "dma_device_id": "system", 00:14:50.777 "dma_device_type": 1 00:14:50.777 }, 00:14:50.777 { 00:14:50.777 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:50.777 "dma_device_type": 2 00:14:50.777 } 00:14:50.777 ], 00:14:50.777 "driver_specific": {} 00:14:50.777 }' 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.777 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:51.036 "name": "BaseBdev2", 00:14:51.036 "aliases": [ 00:14:51.036 "d0af16fd-2712-11ef-b084-113036b5c18d" 00:14:51.036 ], 00:14:51.036 "product_name": "Malloc disk", 00:14:51.036 "block_size": 512, 00:14:51.036 "num_blocks": 65536, 00:14:51.036 "uuid": "d0af16fd-2712-11ef-b084-113036b5c18d", 00:14:51.036 "assigned_rate_limits": { 00:14:51.036 "rw_ios_per_sec": 0, 00:14:51.036 "rw_mbytes_per_sec": 0, 00:14:51.036 "r_mbytes_per_sec": 0, 00:14:51.036 "w_mbytes_per_sec": 0 00:14:51.036 }, 00:14:51.036 "claimed": true, 00:14:51.036 "claim_type": "exclusive_write", 00:14:51.036 "zoned": false, 00:14:51.036 "supported_io_types": { 00:14:51.036 "read": true, 00:14:51.036 "write": true, 00:14:51.036 "unmap": true, 00:14:51.036 "write_zeroes": true, 00:14:51.036 "flush": true, 00:14:51.036 "reset": true, 00:14:51.036 "compare": false, 00:14:51.036 "compare_and_write": false, 00:14:51.036 "abort": true, 00:14:51.036 "nvme_admin": false, 00:14:51.036 "nvme_io": false 00:14:51.036 }, 00:14:51.036 "memory_domains": [ 00:14:51.036 { 00:14:51.036 "dma_device_id": "system", 00:14:51.036 "dma_device_type": 1 00:14:51.036 }, 00:14:51.036 { 00:14:51.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.036 "dma_device_type": 2 00:14:51.036 } 00:14:51.036 ], 00:14:51.036 "driver_specific": {} 00:14:51.036 }' 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:51.036 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:51.602 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:51.602 "name": "BaseBdev3", 00:14:51.602 "aliases": [ 00:14:51.602 "d1338a57-2712-11ef-b084-113036b5c18d" 00:14:51.602 ], 00:14:51.602 "product_name": "Malloc disk", 00:14:51.602 "block_size": 512, 00:14:51.602 "num_blocks": 65536, 00:14:51.602 "uuid": "d1338a57-2712-11ef-b084-113036b5c18d", 00:14:51.602 "assigned_rate_limits": { 00:14:51.602 "rw_ios_per_sec": 0, 00:14:51.602 "rw_mbytes_per_sec": 0, 00:14:51.602 "r_mbytes_per_sec": 0, 00:14:51.603 "w_mbytes_per_sec": 0 00:14:51.603 }, 00:14:51.603 "claimed": true, 00:14:51.603 "claim_type": "exclusive_write", 00:14:51.603 "zoned": false, 00:14:51.603 "supported_io_types": { 00:14:51.603 "read": true, 00:14:51.603 "write": true, 00:14:51.603 "unmap": true, 00:14:51.603 "write_zeroes": true, 00:14:51.603 "flush": true, 00:14:51.603 "reset": true, 00:14:51.603 "compare": false, 00:14:51.603 "compare_and_write": false, 00:14:51.603 "abort": true, 00:14:51.603 "nvme_admin": false, 00:14:51.603 "nvme_io": false 00:14:51.603 }, 00:14:51.603 "memory_domains": [ 00:14:51.603 { 00:14:51.603 "dma_device_id": "system", 00:14:51.603 "dma_device_type": 1 00:14:51.603 }, 00:14:51.603 { 00:14:51.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.603 "dma_device_type": 2 00:14:51.603 } 00:14:51.603 ], 00:14:51.603 "driver_specific": {} 00:14:51.603 }' 00:14:51.603 10:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.603 
10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:51.603 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:51.862 "name": "BaseBdev4", 00:14:51.862 "aliases": [ 00:14:51.862 "d1beb4d1-2712-11ef-b084-113036b5c18d" 00:14:51.862 ], 00:14:51.862 "product_name": "Malloc disk", 00:14:51.862 "block_size": 512, 00:14:51.862 "num_blocks": 65536, 00:14:51.862 "uuid": "d1beb4d1-2712-11ef-b084-113036b5c18d", 00:14:51.862 "assigned_rate_limits": { 00:14:51.862 "rw_ios_per_sec": 0, 00:14:51.862 "rw_mbytes_per_sec": 0, 00:14:51.862 "r_mbytes_per_sec": 0, 00:14:51.862 "w_mbytes_per_sec": 0 00:14:51.862 }, 00:14:51.862 "claimed": true, 00:14:51.862 "claim_type": "exclusive_write", 00:14:51.862 "zoned": false, 00:14:51.862 "supported_io_types": { 00:14:51.862 "read": true, 00:14:51.862 "write": true, 00:14:51.862 "unmap": true, 00:14:51.862 "write_zeroes": true, 00:14:51.862 "flush": true, 00:14:51.862 "reset": true, 00:14:51.862 "compare": false, 00:14:51.862 "compare_and_write": false, 00:14:51.862 "abort": true, 00:14:51.862 "nvme_admin": false, 00:14:51.862 "nvme_io": false 00:14:51.862 }, 00:14:51.862 "memory_domains": [ 00:14:51.862 { 00:14:51.862 "dma_device_id": "system", 00:14:51.862 "dma_device_type": 1 00:14:51.862 }, 00:14:51.862 { 00:14:51.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.862 "dma_device_type": 2 00:14:51.862 } 00:14:51.862 ], 00:14:51.862 "driver_specific": {} 00:14:51.862 }' 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.862 10:18:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.862 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:52.120 [2024-06-10 10:18:57.661864] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.120 [2024-06-10 10:18:57.661885] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.121 [2024-06-10 10:18:57.661905] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.121 [2024-06-10 10:18:57.661920] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.121 [2024-06-10 10:18:57.661923] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8bdf00 name Existed_Raid, state offline 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 62296 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 62296 ']' 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 62296 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps -c -o command 62296 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # tail -1 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:14:52.121 killing process with pid 62296 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 62296' 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 62296 00:14:52.121 [2024-06-10 10:18:57.689929] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.121 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 62296 00:14:52.121 [2024-06-10 10:18:57.709235] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.380 10:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:52.380 00:14:52.380 real 0m28.200s 00:14:52.380 user 0m51.772s 00:14:52.380 sys 0m3.878s 00:14:52.380 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:52.380 10:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.380 ************************************ 00:14:52.380 END TEST raid_state_function_test_sb 00:14:52.380 ************************************ 00:14:52.380 10:18:57 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:52.380 10:18:57 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:14:52.380 10:18:57 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:52.380 10:18:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:52.380 
************************************ 00:14:52.380 START TEST raid_superblock_test 00:14:52.380 ************************************ 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test concat 4 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=63118 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 63118 /var/tmp/spdk-raid.sock 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 63118 ']' 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:52.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:52.380 10:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.380 [2024-06-10 10:18:57.938460] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:14:52.380 [2024-06-10 10:18:57.938688] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:14:52.948 EAL: TSC is not safe to use in SMP mode 00:14:52.948 EAL: TSC is not invariant 00:14:52.948 [2024-06-10 10:18:58.423699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.948 [2024-06-10 10:18:58.505232] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:52.948 [2024-06-10 10:18:58.507434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.948 [2024-06-10 10:18:58.508138] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.948 [2024-06-10 10:18:58.508151] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:53.513 10:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:53.514 malloc1 00:14:53.514 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:53.791 [2024-06-10 10:18:59.386800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:53.791 [2024-06-10 10:18:59.386850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.791 [2024-06-10 10:18:59.386861] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dd02780 00:14:53.791 [2024-06-10 10:18:59.386868] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.791 [2024-06-10 10:18:59.387594] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.791 [2024-06-10 10:18:59.387626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:53.791 pt1 00:14:54.050 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:54.050 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:54.050 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:54.050 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 
00:14:54.050 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:54.050 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:54.050 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:54.051 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:54.051 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:54.308 malloc2 00:14:54.308 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:54.567 [2024-06-10 10:18:59.942826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:54.567 [2024-06-10 10:18:59.942876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.567 [2024-06-10 10:18:59.942886] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dd02c80 00:14:54.567 [2024-06-10 10:18:59.942894] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.567 [2024-06-10 10:18:59.943365] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.567 [2024-06-10 10:18:59.943390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:54.567 pt2 00:14:54.567 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:54.567 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:54.567 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:54.567 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:54.567 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:54.568 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:54.568 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:54.568 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:54.568 10:18:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:54.826 malloc3 00:14:54.826 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:55.084 [2024-06-10 10:19:00.458844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:55.084 [2024-06-10 10:19:00.458900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.084 [2024-06-10 10:19:00.458912] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dd03180 00:14:55.084 [2024-06-10 10:19:00.458919] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.084 [2024-06-10 10:19:00.459414] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.085 [2024-06-10 10:19:00.459444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:55.085 pt3 00:14:55.085 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:55.085 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:55.085 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:14:55.085 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:14:55.085 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:55.085 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:55.085 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:55.085 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:55.085 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:14:55.344 malloc4 00:14:55.344 10:19:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:55.603 [2024-06-10 10:19:01.026876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:55.603 [2024-06-10 10:19:01.026927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.603 [2024-06-10 10:19:01.026938] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dd03680 00:14:55.603 [2024-06-10 10:19:01.026945] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.603 [2024-06-10 10:19:01.027428] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.603 [2024-06-10 10:19:01.027456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:55.603 pt4 00:14:55.603 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:55.603 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:55.603 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:14:55.861 [2024-06-10 10:19:01.234885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:55.862 [2024-06-10 10:19:01.235308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:55.862 [2024-06-10 10:19:01.235323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:55.862 [2024-06-10 10:19:01.235333] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:55.862 [2024-06-10 10:19:01.235379] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82dd03900 00:14:55.862 [2024-06-10 10:19:01.235384] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:55.862 [2024-06-10 10:19:01.235412] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82dd65e20 00:14:55.862 [2024-06-10 10:19:01.235467] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82dd03900 00:14:55.862 [2024-06-10 10:19:01.235471] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82dd03900 00:14:55.862 [2024-06-10 10:19:01.235490] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.862 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.120 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:56.120 "name": "raid_bdev1", 00:14:56.120 "uuid": "db34ef71-2712-11ef-b084-113036b5c18d", 00:14:56.120 "strip_size_kb": 64, 00:14:56.120 "state": "online", 00:14:56.120 "raid_level": "concat", 00:14:56.120 "superblock": true, 00:14:56.120 "num_base_bdevs": 4, 00:14:56.120 "num_base_bdevs_discovered": 4, 00:14:56.120 "num_base_bdevs_operational": 4, 00:14:56.120 "base_bdevs_list": [ 00:14:56.121 { 00:14:56.121 "name": "pt1", 00:14:56.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.121 "is_configured": true, 00:14:56.121 "data_offset": 2048, 00:14:56.121 "data_size": 63488 00:14:56.121 }, 00:14:56.121 { 00:14:56.121 "name": "pt2", 00:14:56.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.121 "is_configured": true, 00:14:56.121 "data_offset": 2048, 00:14:56.121 "data_size": 63488 00:14:56.121 }, 00:14:56.121 { 00:14:56.121 "name": "pt3", 00:14:56.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.121 "is_configured": true, 00:14:56.121 "data_offset": 2048, 00:14:56.121 "data_size": 63488 00:14:56.121 }, 00:14:56.121 { 00:14:56.121 "name": "pt4", 00:14:56.121 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:56.121 "is_configured": true, 00:14:56.121 "data_offset": 2048, 00:14:56.121 "data_size": 63488 00:14:56.121 } 00:14:56.121 ] 00:14:56.121 }' 00:14:56.121 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:56.121 10:19:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.399 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_properties raid_bdev1 00:14:56.399 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:56.399 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:56.399 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:56.399 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:56.399 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:56.399 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:56.399 10:19:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:56.658 [2024-06-10 10:19:02.202942] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.658 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:56.658 "name": "raid_bdev1", 00:14:56.658 "aliases": [ 00:14:56.658 "db34ef71-2712-11ef-b084-113036b5c18d" 00:14:56.658 ], 00:14:56.658 "product_name": "Raid Volume", 00:14:56.658 "block_size": 512, 00:14:56.658 "num_blocks": 253952, 00:14:56.658 "uuid": "db34ef71-2712-11ef-b084-113036b5c18d", 00:14:56.658 "assigned_rate_limits": { 00:14:56.658 "rw_ios_per_sec": 0, 00:14:56.658 "rw_mbytes_per_sec": 0, 00:14:56.658 "r_mbytes_per_sec": 0, 00:14:56.658 "w_mbytes_per_sec": 0 00:14:56.658 }, 00:14:56.658 "claimed": false, 00:14:56.658 "zoned": false, 00:14:56.658 "supported_io_types": { 00:14:56.658 "read": true, 00:14:56.658 "write": true, 00:14:56.658 "unmap": true, 00:14:56.658 "write_zeroes": true, 00:14:56.658 "flush": true, 00:14:56.658 "reset": true, 00:14:56.658 "compare": false, 00:14:56.658 "compare_and_write": false, 00:14:56.658 "abort": false, 00:14:56.658 "nvme_admin": false, 00:14:56.658 "nvme_io": false 00:14:56.658 }, 00:14:56.658 "memory_domains": [ 00:14:56.658 { 00:14:56.658 "dma_device_id": "system", 00:14:56.658 "dma_device_type": 1 00:14:56.658 }, 00:14:56.658 { 00:14:56.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.658 "dma_device_type": 2 00:14:56.658 }, 00:14:56.658 { 00:14:56.658 "dma_device_id": "system", 00:14:56.658 "dma_device_type": 1 00:14:56.658 }, 00:14:56.658 { 00:14:56.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.658 "dma_device_type": 2 00:14:56.658 }, 00:14:56.658 { 00:14:56.658 "dma_device_id": "system", 00:14:56.658 "dma_device_type": 1 00:14:56.658 }, 00:14:56.658 { 00:14:56.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.658 "dma_device_type": 2 00:14:56.658 }, 00:14:56.658 { 00:14:56.658 "dma_device_id": "system", 00:14:56.658 "dma_device_type": 1 00:14:56.658 }, 00:14:56.658 { 00:14:56.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.658 "dma_device_type": 2 00:14:56.658 } 00:14:56.658 ], 00:14:56.658 "driver_specific": { 00:14:56.658 "raid": { 00:14:56.658 "uuid": "db34ef71-2712-11ef-b084-113036b5c18d", 00:14:56.658 "strip_size_kb": 64, 00:14:56.658 "state": "online", 00:14:56.658 "raid_level": "concat", 00:14:56.658 "superblock": true, 00:14:56.658 "num_base_bdevs": 4, 00:14:56.658 "num_base_bdevs_discovered": 4, 00:14:56.658 "num_base_bdevs_operational": 4, 00:14:56.658 "base_bdevs_list": [ 00:14:56.658 { 00:14:56.658 "name": "pt1", 00:14:56.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.658 "is_configured": true, 00:14:56.658 "data_offset": 2048, 
00:14:56.658 "data_size": 63488 00:14:56.658 }, 00:14:56.658 { 00:14:56.658 "name": "pt2", 00:14:56.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.658 "is_configured": true, 00:14:56.658 "data_offset": 2048, 00:14:56.658 "data_size": 63488 00:14:56.658 }, 00:14:56.658 { 00:14:56.658 "name": "pt3", 00:14:56.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.658 "is_configured": true, 00:14:56.658 "data_offset": 2048, 00:14:56.658 "data_size": 63488 00:14:56.658 }, 00:14:56.658 { 00:14:56.658 "name": "pt4", 00:14:56.658 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:56.658 "is_configured": true, 00:14:56.658 "data_offset": 2048, 00:14:56.658 "data_size": 63488 00:14:56.658 } 00:14:56.658 ] 00:14:56.658 } 00:14:56.658 } 00:14:56.658 }' 00:14:56.658 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.658 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:56.658 pt2 00:14:56.658 pt3 00:14:56.658 pt4' 00:14:56.658 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:56.658 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:56.658 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:56.917 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:56.917 "name": "pt1", 00:14:56.917 "aliases": [ 00:14:56.917 "00000000-0000-0000-0000-000000000001" 00:14:56.917 ], 00:14:56.917 "product_name": "passthru", 00:14:56.917 "block_size": 512, 00:14:56.917 "num_blocks": 65536, 00:14:56.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.917 "assigned_rate_limits": { 00:14:56.917 "rw_ios_per_sec": 0, 00:14:56.917 "rw_mbytes_per_sec": 0, 00:14:56.917 "r_mbytes_per_sec": 0, 00:14:56.917 "w_mbytes_per_sec": 0 00:14:56.917 }, 00:14:56.917 "claimed": true, 00:14:56.917 "claim_type": "exclusive_write", 00:14:56.917 "zoned": false, 00:14:56.917 "supported_io_types": { 00:14:56.917 "read": true, 00:14:56.917 "write": true, 00:14:56.917 "unmap": true, 00:14:56.917 "write_zeroes": true, 00:14:56.917 "flush": true, 00:14:56.917 "reset": true, 00:14:56.917 "compare": false, 00:14:56.917 "compare_and_write": false, 00:14:56.917 "abort": true, 00:14:56.917 "nvme_admin": false, 00:14:56.917 "nvme_io": false 00:14:56.917 }, 00:14:56.917 "memory_domains": [ 00:14:56.917 { 00:14:56.917 "dma_device_id": "system", 00:14:56.917 "dma_device_type": 1 00:14:56.917 }, 00:14:56.917 { 00:14:56.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.917 "dma_device_type": 2 00:14:56.917 } 00:14:56.917 ], 00:14:56.917 "driver_specific": { 00:14:56.917 "passthru": { 00:14:56.917 "name": "pt1", 00:14:56.917 "base_bdev_name": "malloc1" 00:14:56.917 } 00:14:56.917 } 00:14:56.917 }' 00:14:56.917 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:57.175 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:57.433 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:57.433 "name": "pt2", 00:14:57.433 "aliases": [ 00:14:57.434 "00000000-0000-0000-0000-000000000002" 00:14:57.434 ], 00:14:57.434 "product_name": "passthru", 00:14:57.434 "block_size": 512, 00:14:57.434 "num_blocks": 65536, 00:14:57.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.434 "assigned_rate_limits": { 00:14:57.434 "rw_ios_per_sec": 0, 00:14:57.434 "rw_mbytes_per_sec": 0, 00:14:57.434 "r_mbytes_per_sec": 0, 00:14:57.434 "w_mbytes_per_sec": 0 00:14:57.434 }, 00:14:57.434 "claimed": true, 00:14:57.434 "claim_type": "exclusive_write", 00:14:57.434 "zoned": false, 00:14:57.434 "supported_io_types": { 00:14:57.434 "read": true, 00:14:57.434 "write": true, 00:14:57.434 "unmap": true, 00:14:57.434 "write_zeroes": true, 00:14:57.434 "flush": true, 00:14:57.434 "reset": true, 00:14:57.434 "compare": false, 00:14:57.434 "compare_and_write": false, 00:14:57.434 "abort": true, 00:14:57.434 "nvme_admin": false, 00:14:57.434 "nvme_io": false 00:14:57.434 }, 00:14:57.434 "memory_domains": [ 00:14:57.434 { 00:14:57.434 "dma_device_id": "system", 00:14:57.434 "dma_device_type": 1 00:14:57.434 }, 00:14:57.434 { 00:14:57.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.434 "dma_device_type": 2 00:14:57.434 } 00:14:57.434 ], 00:14:57.434 "driver_specific": { 00:14:57.434 "passthru": { 00:14:57.434 "name": "pt2", 00:14:57.434 "base_bdev_name": "malloc2" 00:14:57.434 } 00:14:57.434 } 00:14:57.434 }' 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:57.434 10:19:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:57.692 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:57.692 "name": "pt3", 00:14:57.692 "aliases": [ 00:14:57.692 "00000000-0000-0000-0000-000000000003" 00:14:57.692 ], 00:14:57.692 "product_name": "passthru", 00:14:57.692 "block_size": 512, 00:14:57.692 "num_blocks": 65536, 00:14:57.692 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.692 "assigned_rate_limits": { 00:14:57.692 "rw_ios_per_sec": 0, 00:14:57.692 "rw_mbytes_per_sec": 0, 00:14:57.692 "r_mbytes_per_sec": 0, 00:14:57.692 "w_mbytes_per_sec": 0 00:14:57.692 }, 00:14:57.692 "claimed": true, 00:14:57.692 "claim_type": "exclusive_write", 00:14:57.692 "zoned": false, 00:14:57.692 "supported_io_types": { 00:14:57.692 "read": true, 00:14:57.692 "write": true, 00:14:57.692 "unmap": true, 00:14:57.692 "write_zeroes": true, 00:14:57.692 "flush": true, 00:14:57.692 "reset": true, 00:14:57.692 "compare": false, 00:14:57.692 "compare_and_write": false, 00:14:57.692 "abort": true, 00:14:57.692 "nvme_admin": false, 00:14:57.692 "nvme_io": false 00:14:57.692 }, 00:14:57.692 "memory_domains": [ 00:14:57.692 { 00:14:57.692 "dma_device_id": "system", 00:14:57.692 "dma_device_type": 1 00:14:57.692 }, 00:14:57.692 { 00:14:57.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.692 "dma_device_type": 2 00:14:57.692 } 00:14:57.692 ], 00:14:57.692 "driver_specific": { 00:14:57.692 "passthru": { 00:14:57.692 "name": "pt3", 00:14:57.692 "base_bdev_name": "malloc3" 00:14:57.692 } 00:14:57.692 } 00:14:57.692 }' 00:14:57.692 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.692 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:57.692 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:57.692 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.692 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:57.692 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:57.692 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.950 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:57.951 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:57.951 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.951 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.951 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.951 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:57.951 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:14:57.951 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:58.209 "name": "pt4", 00:14:58.209 "aliases": [ 00:14:58.209 "00000000-0000-0000-0000-000000000004" 00:14:58.209 ], 00:14:58.209 "product_name": "passthru", 00:14:58.209 "block_size": 512, 00:14:58.209 "num_blocks": 65536, 00:14:58.209 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:58.209 "assigned_rate_limits": { 00:14:58.209 "rw_ios_per_sec": 0, 00:14:58.209 "rw_mbytes_per_sec": 0, 00:14:58.209 "r_mbytes_per_sec": 0, 00:14:58.209 "w_mbytes_per_sec": 0 00:14:58.209 }, 00:14:58.209 "claimed": true, 00:14:58.209 "claim_type": "exclusive_write", 00:14:58.209 "zoned": false, 00:14:58.209 "supported_io_types": { 00:14:58.209 "read": true, 00:14:58.209 "write": true, 00:14:58.209 "unmap": true, 00:14:58.209 "write_zeroes": true, 00:14:58.209 "flush": true, 00:14:58.209 "reset": true, 00:14:58.209 "compare": false, 00:14:58.209 "compare_and_write": false, 00:14:58.209 "abort": true, 00:14:58.209 "nvme_admin": false, 00:14:58.209 "nvme_io": false 00:14:58.209 }, 00:14:58.209 "memory_domains": [ 00:14:58.209 { 00:14:58.209 "dma_device_id": "system", 00:14:58.209 "dma_device_type": 1 00:14:58.209 }, 00:14:58.209 { 00:14:58.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.209 "dma_device_type": 2 00:14:58.209 } 00:14:58.209 ], 00:14:58.209 "driver_specific": { 00:14:58.209 "passthru": { 00:14:58.209 "name": "pt4", 00:14:58.209 "base_bdev_name": "malloc4" 00:14:58.209 } 00:14:58.209 } 00:14:58.209 }' 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:58.209 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:58.467 [2024-06-10 10:19:03.923027] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.467 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=db34ef71-2712-11ef-b084-113036b5c18d 00:14:58.467 10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z db34ef71-2712-11ef-b084-113036b5c18d ']' 00:14:58.467 
10:19:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:58.726 [2024-06-10 10:19:04.179007] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.726 [2024-06-10 10:19:04.179024] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.726 [2024-06-10 10:19:04.179046] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.726 [2024-06-10 10:19:04.179060] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.726 [2024-06-10 10:19:04.179064] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82dd03900 name raid_bdev1, state offline 00:14:58.726 10:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.726 10:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:59.036 10:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:59.036 10:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:59.036 10:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:59.036 10:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:59.294 10:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:59.294 10:19:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:59.554 10:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:59.554 10:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:59.812 10:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:59.812 10:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:00.071 10:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:00.071 10:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:00.329 10:19:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:00.329 10:19:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:00.586 [2024-06-10 10:19:06.163099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:00.586 [2024-06-10 10:19:06.163561] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:00.586 [2024-06-10 10:19:06.163579] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:00.586 [2024-06-10 10:19:06.163586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:00.586 [2024-06-10 10:19:06.163599] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:00.586 [2024-06-10 10:19:06.163634] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:00.586 [2024-06-10 10:19:06.163645] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:00.586 [2024-06-10 10:19:06.163653] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:00.586 [2024-06-10 10:19:06.163661] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.586 [2024-06-10 10:19:06.163665] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82dd03680 name raid_bdev1, state configuring 00:15:00.586 request: 00:15:00.586 { 00:15:00.586 "name": "raid_bdev1", 00:15:00.586 "raid_level": "concat", 00:15:00.586 "base_bdevs": [ 00:15:00.586 "malloc1", 00:15:00.586 "malloc2", 00:15:00.586 "malloc3", 00:15:00.586 "malloc4" 00:15:00.586 ], 00:15:00.586 "superblock": false, 00:15:00.586 "strip_size_kb": 64, 00:15:00.586 "method": "bdev_raid_create", 00:15:00.586 "req_id": 1 00:15:00.586 } 00:15:00.586 Got JSON-RPC error response 00:15:00.586 response: 00:15:00.586 { 00:15:00.586 "code": -17, 00:15:00.586 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:00.586 } 00:15:00.586 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:15:00.586 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 
00:15:00.586 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:00.586 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:00.586 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.586 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:00.908 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:00.908 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:00.908 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:01.168 [2024-06-10 10:19:06.723094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.168 [2024-06-10 10:19:06.723135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.168 [2024-06-10 10:19:06.723144] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dd03180 00:15:01.168 [2024-06-10 10:19:06.723150] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.168 [2024-06-10 10:19:06.723632] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.168 [2024-06-10 10:19:06.723655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.168 [2024-06-10 10:19:06.723674] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:01.168 [2024-06-10 10:19:06.723683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:01.168 pt1 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.168 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.427 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.427 "name": "raid_bdev1", 00:15:01.427 "uuid": "db34ef71-2712-11ef-b084-113036b5c18d", 00:15:01.427 "strip_size_kb": 64, 
00:15:01.427 "state": "configuring", 00:15:01.427 "raid_level": "concat", 00:15:01.427 "superblock": true, 00:15:01.427 "num_base_bdevs": 4, 00:15:01.427 "num_base_bdevs_discovered": 1, 00:15:01.427 "num_base_bdevs_operational": 4, 00:15:01.427 "base_bdevs_list": [ 00:15:01.427 { 00:15:01.427 "name": "pt1", 00:15:01.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.427 "is_configured": true, 00:15:01.427 "data_offset": 2048, 00:15:01.427 "data_size": 63488 00:15:01.427 }, 00:15:01.427 { 00:15:01.427 "name": null, 00:15:01.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.427 "is_configured": false, 00:15:01.427 "data_offset": 2048, 00:15:01.427 "data_size": 63488 00:15:01.427 }, 00:15:01.427 { 00:15:01.427 "name": null, 00:15:01.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.427 "is_configured": false, 00:15:01.427 "data_offset": 2048, 00:15:01.427 "data_size": 63488 00:15:01.427 }, 00:15:01.427 { 00:15:01.427 "name": null, 00:15:01.427 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.427 "is_configured": false, 00:15:01.427 "data_offset": 2048, 00:15:01.427 "data_size": 63488 00:15:01.427 } 00:15:01.427 ] 00:15:01.427 }' 00:15:01.427 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.427 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.992 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:15:01.992 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:01.992 [2024-06-10 10:19:07.575132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.992 [2024-06-10 10:19:07.575176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.992 [2024-06-10 10:19:07.575186] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dd02780 00:15:01.992 [2024-06-10 10:19:07.575193] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.992 [2024-06-10 10:19:07.575271] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.992 [2024-06-10 10:19:07.575280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.992 [2024-06-10 10:19:07.575296] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:01.992 [2024-06-10 10:19:07.575303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:01.992 pt2 00:15:01.992 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:02.557 [2024-06-10 10:19:07.899137] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.557 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.815 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.815 "name": "raid_bdev1", 00:15:02.815 "uuid": "db34ef71-2712-11ef-b084-113036b5c18d", 00:15:02.815 "strip_size_kb": 64, 00:15:02.815 "state": "configuring", 00:15:02.815 "raid_level": "concat", 00:15:02.815 "superblock": true, 00:15:02.815 "num_base_bdevs": 4, 00:15:02.815 "num_base_bdevs_discovered": 1, 00:15:02.815 "num_base_bdevs_operational": 4, 00:15:02.815 "base_bdevs_list": [ 00:15:02.815 { 00:15:02.815 "name": "pt1", 00:15:02.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:02.815 "is_configured": true, 00:15:02.815 "data_offset": 2048, 00:15:02.815 "data_size": 63488 00:15:02.815 }, 00:15:02.815 { 00:15:02.815 "name": null, 00:15:02.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:02.815 "is_configured": false, 00:15:02.815 "data_offset": 2048, 00:15:02.815 "data_size": 63488 00:15:02.815 }, 00:15:02.815 { 00:15:02.815 "name": null, 00:15:02.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.815 "is_configured": false, 00:15:02.815 "data_offset": 2048, 00:15:02.815 "data_size": 63488 00:15:02.816 }, 00:15:02.816 { 00:15:02.816 "name": null, 00:15:02.816 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.816 "is_configured": false, 00:15:02.816 "data_offset": 2048, 00:15:02.816 "data_size": 63488 00:15:02.816 } 00:15:02.816 ] 00:15:02.816 }' 00:15:02.816 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.816 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.074 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:03.074 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:03.074 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.332 [2024-06-10 10:19:08.811204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.332 [2024-06-10 10:19:08.811257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.332 [2024-06-10 10:19:08.811267] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dd02780 00:15:03.332 [2024-06-10 10:19:08.811274] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.332 [2024-06-10 10:19:08.811361] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.332 [2024-06-10 10:19:08.811370] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.332 [2024-06-10 10:19:08.811386] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:03.332 [2024-06-10 10:19:08.811394] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.332 pt2 00:15:03.332 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:03.332 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:03.332 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:03.591 [2024-06-10 10:19:09.019212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:03.591 [2024-06-10 10:19:09.019257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.591 [2024-06-10 10:19:09.019268] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dd03b80 00:15:03.591 [2024-06-10 10:19:09.019275] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.591 [2024-06-10 10:19:09.019348] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.591 [2024-06-10 10:19:09.019357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:03.591 [2024-06-10 10:19:09.019374] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:03.591 [2024-06-10 10:19:09.019381] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:03.591 pt3 00:15:03.591 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:03.591 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:03.591 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:03.850 [2024-06-10 10:19:09.311229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:03.850 [2024-06-10 10:19:09.311275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.850 [2024-06-10 10:19:09.311285] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dd03900 00:15:03.850 [2024-06-10 10:19:09.311292] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.850 [2024-06-10 10:19:09.311361] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.850 [2024-06-10 10:19:09.311369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:03.850 [2024-06-10 10:19:09.311386] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:03.850 [2024-06-10 10:19:09.311393] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:03.850 [2024-06-10 10:19:09.311414] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82dd02c80 00:15:03.850 [2024-06-10 10:19:09.311418] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:03.850 [2024-06-10 10:19:09.311436] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82dd65e20 00:15:03.850 
[2024-06-10 10:19:09.311475] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82dd02c80 00:15:03.850 [2024-06-10 10:19:09.311479] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82dd02c80 00:15:03.850 [2024-06-10 10:19:09.311495] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.850 pt4 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.850 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.109 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:04.109 "name": "raid_bdev1", 00:15:04.109 "uuid": "db34ef71-2712-11ef-b084-113036b5c18d", 00:15:04.109 "strip_size_kb": 64, 00:15:04.109 "state": "online", 00:15:04.109 "raid_level": "concat", 00:15:04.109 "superblock": true, 00:15:04.109 "num_base_bdevs": 4, 00:15:04.109 "num_base_bdevs_discovered": 4, 00:15:04.109 "num_base_bdevs_operational": 4, 00:15:04.109 "base_bdevs_list": [ 00:15:04.109 { 00:15:04.109 "name": "pt1", 00:15:04.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.109 "is_configured": true, 00:15:04.109 "data_offset": 2048, 00:15:04.109 "data_size": 63488 00:15:04.109 }, 00:15:04.109 { 00:15:04.109 "name": "pt2", 00:15:04.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.109 "is_configured": true, 00:15:04.109 "data_offset": 2048, 00:15:04.109 "data_size": 63488 00:15:04.109 }, 00:15:04.109 { 00:15:04.109 "name": "pt3", 00:15:04.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.109 "is_configured": true, 00:15:04.109 "data_offset": 2048, 00:15:04.109 "data_size": 63488 00:15:04.109 }, 00:15:04.109 { 00:15:04.109 "name": "pt4", 00:15:04.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.109 "is_configured": true, 00:15:04.109 "data_offset": 2048, 00:15:04.109 "data_size": 63488 00:15:04.109 } 00:15:04.109 ] 00:15:04.109 }' 00:15:04.109 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.109 10:19:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.417 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:04.417 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:04.417 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:04.417 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:04.417 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:04.417 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:04.417 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:04.417 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:04.677 [2024-06-10 10:19:10.175298] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.677 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:04.677 "name": "raid_bdev1", 00:15:04.677 "aliases": [ 00:15:04.677 "db34ef71-2712-11ef-b084-113036b5c18d" 00:15:04.677 ], 00:15:04.677 "product_name": "Raid Volume", 00:15:04.677 "block_size": 512, 00:15:04.677 "num_blocks": 253952, 00:15:04.677 "uuid": "db34ef71-2712-11ef-b084-113036b5c18d", 00:15:04.677 "assigned_rate_limits": { 00:15:04.677 "rw_ios_per_sec": 0, 00:15:04.677 "rw_mbytes_per_sec": 0, 00:15:04.677 "r_mbytes_per_sec": 0, 00:15:04.677 "w_mbytes_per_sec": 0 00:15:04.677 }, 00:15:04.677 "claimed": false, 00:15:04.677 "zoned": false, 00:15:04.677 "supported_io_types": { 00:15:04.677 "read": true, 00:15:04.677 "write": true, 00:15:04.677 "unmap": true, 00:15:04.677 "write_zeroes": true, 00:15:04.677 "flush": true, 00:15:04.677 "reset": true, 00:15:04.677 "compare": false, 00:15:04.677 "compare_and_write": false, 00:15:04.677 "abort": false, 00:15:04.677 "nvme_admin": false, 00:15:04.677 "nvme_io": false 00:15:04.677 }, 00:15:04.677 "memory_domains": [ 00:15:04.677 { 00:15:04.677 "dma_device_id": "system", 00:15:04.677 "dma_device_type": 1 00:15:04.677 }, 00:15:04.677 { 00:15:04.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.677 "dma_device_type": 2 00:15:04.677 }, 00:15:04.677 { 00:15:04.677 "dma_device_id": "system", 00:15:04.677 "dma_device_type": 1 00:15:04.677 }, 00:15:04.677 { 00:15:04.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.677 "dma_device_type": 2 00:15:04.677 }, 00:15:04.677 { 00:15:04.677 "dma_device_id": "system", 00:15:04.677 "dma_device_type": 1 00:15:04.677 }, 00:15:04.677 { 00:15:04.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.677 "dma_device_type": 2 00:15:04.677 }, 00:15:04.677 { 00:15:04.677 "dma_device_id": "system", 00:15:04.677 "dma_device_type": 1 00:15:04.677 }, 00:15:04.677 { 00:15:04.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.677 "dma_device_type": 2 00:15:04.677 } 00:15:04.677 ], 00:15:04.677 "driver_specific": { 00:15:04.677 "raid": { 00:15:04.677 "uuid": "db34ef71-2712-11ef-b084-113036b5c18d", 00:15:04.677 "strip_size_kb": 64, 00:15:04.677 "state": "online", 00:15:04.677 "raid_level": "concat", 00:15:04.677 "superblock": true, 00:15:04.677 "num_base_bdevs": 4, 00:15:04.677 "num_base_bdevs_discovered": 4, 00:15:04.677 "num_base_bdevs_operational": 4, 00:15:04.677 "base_bdevs_list": [ 00:15:04.677 { 
00:15:04.677 "name": "pt1", 00:15:04.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.677 "is_configured": true, 00:15:04.677 "data_offset": 2048, 00:15:04.677 "data_size": 63488 00:15:04.677 }, 00:15:04.677 { 00:15:04.677 "name": "pt2", 00:15:04.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.677 "is_configured": true, 00:15:04.677 "data_offset": 2048, 00:15:04.677 "data_size": 63488 00:15:04.677 }, 00:15:04.677 { 00:15:04.677 "name": "pt3", 00:15:04.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.677 "is_configured": true, 00:15:04.677 "data_offset": 2048, 00:15:04.677 "data_size": 63488 00:15:04.677 }, 00:15:04.677 { 00:15:04.677 "name": "pt4", 00:15:04.677 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.677 "is_configured": true, 00:15:04.677 "data_offset": 2048, 00:15:04.677 "data_size": 63488 00:15:04.677 } 00:15:04.677 ] 00:15:04.677 } 00:15:04.677 } 00:15:04.677 }' 00:15:04.677 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.677 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:04.677 pt2 00:15:04.677 pt3 00:15:04.677 pt4' 00:15:04.677 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:04.677 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:04.677 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:04.935 "name": "pt1", 00:15:04.935 "aliases": [ 00:15:04.935 "00000000-0000-0000-0000-000000000001" 00:15:04.935 ], 00:15:04.935 "product_name": "passthru", 00:15:04.935 "block_size": 512, 00:15:04.935 "num_blocks": 65536, 00:15:04.935 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.935 "assigned_rate_limits": { 00:15:04.935 "rw_ios_per_sec": 0, 00:15:04.935 "rw_mbytes_per_sec": 0, 00:15:04.935 "r_mbytes_per_sec": 0, 00:15:04.935 "w_mbytes_per_sec": 0 00:15:04.935 }, 00:15:04.935 "claimed": true, 00:15:04.935 "claim_type": "exclusive_write", 00:15:04.935 "zoned": false, 00:15:04.935 "supported_io_types": { 00:15:04.935 "read": true, 00:15:04.935 "write": true, 00:15:04.935 "unmap": true, 00:15:04.935 "write_zeroes": true, 00:15:04.935 "flush": true, 00:15:04.935 "reset": true, 00:15:04.935 "compare": false, 00:15:04.935 "compare_and_write": false, 00:15:04.935 "abort": true, 00:15:04.935 "nvme_admin": false, 00:15:04.935 "nvme_io": false 00:15:04.935 }, 00:15:04.935 "memory_domains": [ 00:15:04.935 { 00:15:04.935 "dma_device_id": "system", 00:15:04.935 "dma_device_type": 1 00:15:04.935 }, 00:15:04.935 { 00:15:04.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.935 "dma_device_type": 2 00:15:04.935 } 00:15:04.935 ], 00:15:04.935 "driver_specific": { 00:15:04.935 "passthru": { 00:15:04.935 "name": "pt1", 00:15:04.935 "base_bdev_name": "malloc1" 00:15:04.935 } 00:15:04.935 } 00:15:04.935 }' 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:04.935 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:05.194 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:05.194 "name": "pt2", 00:15:05.194 "aliases": [ 00:15:05.194 "00000000-0000-0000-0000-000000000002" 00:15:05.194 ], 00:15:05.194 "product_name": "passthru", 00:15:05.194 "block_size": 512, 00:15:05.194 "num_blocks": 65536, 00:15:05.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.194 "assigned_rate_limits": { 00:15:05.194 "rw_ios_per_sec": 0, 00:15:05.194 "rw_mbytes_per_sec": 0, 00:15:05.194 "r_mbytes_per_sec": 0, 00:15:05.194 "w_mbytes_per_sec": 0 00:15:05.194 }, 00:15:05.194 "claimed": true, 00:15:05.194 "claim_type": "exclusive_write", 00:15:05.194 "zoned": false, 00:15:05.194 "supported_io_types": { 00:15:05.194 "read": true, 00:15:05.194 "write": true, 00:15:05.194 "unmap": true, 00:15:05.194 "write_zeroes": true, 00:15:05.194 "flush": true, 00:15:05.194 "reset": true, 00:15:05.194 "compare": false, 00:15:05.194 "compare_and_write": false, 00:15:05.194 "abort": true, 00:15:05.194 "nvme_admin": false, 00:15:05.194 "nvme_io": false 00:15:05.194 }, 00:15:05.194 "memory_domains": [ 00:15:05.194 { 00:15:05.194 "dma_device_id": "system", 00:15:05.194 "dma_device_type": 1 00:15:05.194 }, 00:15:05.194 { 00:15:05.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.194 "dma_device_type": 2 00:15:05.194 } 00:15:05.194 ], 00:15:05.194 "driver_specific": { 00:15:05.194 "passthru": { 00:15:05.194 "name": "pt2", 00:15:05.194 "base_bdev_name": "malloc2" 00:15:05.194 } 00:15:05.194 } 00:15:05.194 }' 00:15:05.194 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.194 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.194 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:05.194 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.194 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.194 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:05.194 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.453 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.453 10:19:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:05.453 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.453 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.453 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:05.453 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:05.453 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:05.453 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:05.712 "name": "pt3", 00:15:05.712 "aliases": [ 00:15:05.712 "00000000-0000-0000-0000-000000000003" 00:15:05.712 ], 00:15:05.712 "product_name": "passthru", 00:15:05.712 "block_size": 512, 00:15:05.712 "num_blocks": 65536, 00:15:05.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.712 "assigned_rate_limits": { 00:15:05.712 "rw_ios_per_sec": 0, 00:15:05.712 "rw_mbytes_per_sec": 0, 00:15:05.712 "r_mbytes_per_sec": 0, 00:15:05.712 "w_mbytes_per_sec": 0 00:15:05.712 }, 00:15:05.712 "claimed": true, 00:15:05.712 "claim_type": "exclusive_write", 00:15:05.712 "zoned": false, 00:15:05.712 "supported_io_types": { 00:15:05.712 "read": true, 00:15:05.712 "write": true, 00:15:05.712 "unmap": true, 00:15:05.712 "write_zeroes": true, 00:15:05.712 "flush": true, 00:15:05.712 "reset": true, 00:15:05.712 "compare": false, 00:15:05.712 "compare_and_write": false, 00:15:05.712 "abort": true, 00:15:05.712 "nvme_admin": false, 00:15:05.712 "nvme_io": false 00:15:05.712 }, 00:15:05.712 "memory_domains": [ 00:15:05.712 { 00:15:05.712 "dma_device_id": "system", 00:15:05.712 "dma_device_type": 1 00:15:05.712 }, 00:15:05.712 { 00:15:05.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.712 "dma_device_type": 2 00:15:05.712 } 00:15:05.712 ], 00:15:05.712 "driver_specific": { 00:15:05.712 "passthru": { 00:15:05.712 "name": "pt3", 00:15:05.712 "base_bdev_name": "malloc3" 00:15:05.712 } 00:15:05.712 } 00:15:05.712 }' 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 
-- # for name in $base_bdev_names 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:05.712 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:05.971 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:05.971 "name": "pt4", 00:15:05.971 "aliases": [ 00:15:05.971 "00000000-0000-0000-0000-000000000004" 00:15:05.971 ], 00:15:05.971 "product_name": "passthru", 00:15:05.971 "block_size": 512, 00:15:05.971 "num_blocks": 65536, 00:15:05.971 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:05.971 "assigned_rate_limits": { 00:15:05.971 "rw_ios_per_sec": 0, 00:15:05.971 "rw_mbytes_per_sec": 0, 00:15:05.971 "r_mbytes_per_sec": 0, 00:15:05.971 "w_mbytes_per_sec": 0 00:15:05.971 }, 00:15:05.971 "claimed": true, 00:15:05.971 "claim_type": "exclusive_write", 00:15:05.971 "zoned": false, 00:15:05.971 "supported_io_types": { 00:15:05.971 "read": true, 00:15:05.971 "write": true, 00:15:05.971 "unmap": true, 00:15:05.971 "write_zeroes": true, 00:15:05.971 "flush": true, 00:15:05.971 "reset": true, 00:15:05.971 "compare": false, 00:15:05.971 "compare_and_write": false, 00:15:05.971 "abort": true, 00:15:05.971 "nvme_admin": false, 00:15:05.971 "nvme_io": false 00:15:05.971 }, 00:15:05.971 "memory_domains": [ 00:15:05.971 { 00:15:05.971 "dma_device_id": "system", 00:15:05.971 "dma_device_type": 1 00:15:05.971 }, 00:15:05.971 { 00:15:05.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.971 "dma_device_type": 2 00:15:05.971 } 00:15:05.971 ], 00:15:05.971 "driver_specific": { 00:15:05.971 "passthru": { 00:15:05.971 "name": "pt4", 00:15:05.971 "base_bdev_name": "malloc4" 00:15:05.971 } 00:15:05.971 } 00:15:05.971 }' 00:15:05.971 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.971 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.971 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:05.971 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.971 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.971 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:05.971 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.971 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.971 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:05.972 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.972 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.972 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:05.972 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:05.972 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:06.230 [2024-06-10 10:19:11.707352] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 
db34ef71-2712-11ef-b084-113036b5c18d '!=' db34ef71-2712-11ef-b084-113036b5c18d ']' 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 63118 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 63118 ']' 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 63118 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps -c -o command 63118 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # tail -1 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:15:06.230 killing process with pid 63118 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63118' 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 63118 00:15:06.230 [2024-06-10 10:19:11.738902] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.230 [2024-06-10 10:19:11.738921] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.230 [2024-06-10 10:19:11.738946] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.230 [2024-06-10 10:19:11.738951] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82dd02c80 name raid_bdev1, state offline 00:15:06.230 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 63118 00:15:06.230 [2024-06-10 10:19:11.758045] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.489 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:06.489 00:15:06.489 real 0m13.992s 00:15:06.489 user 0m25.100s 00:15:06.489 sys 0m2.145s 00:15:06.489 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:06.489 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.489 ************************************ 00:15:06.489 END TEST raid_superblock_test 00:15:06.489 ************************************ 00:15:06.489 10:19:11 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:06.489 10:19:11 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:15:06.489 10:19:11 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:06.489 10:19:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.489 ************************************ 00:15:06.489 START TEST raid_read_error_test 00:15:06.489 ************************************ 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 4 read 00:15:06.489 10:19:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.BL6NmFnb 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=63519 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 63519 /var/tmp/spdk-raid.sock 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 
63519 ']' 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:06.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:06.489 10:19:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.489 [2024-06-10 10:19:11.978593] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:15:06.489 [2024-06-10 10:19:11.978851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:07.094 EAL: TSC is not safe to use in SMP mode 00:15:07.094 EAL: TSC is not invariant 00:15:07.094 [2024-06-10 10:19:12.444174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.094 [2024-06-10 10:19:12.523517] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:07.094 [2024-06-10 10:19:12.525622] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.095 [2024-06-10 10:19:12.526291] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.095 [2024-06-10 10:19:12.526302] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.661 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:07.661 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:15:07.661 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:07.661 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:07.661 BaseBdev1_malloc 00:15:07.661 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:07.919 true 00:15:07.919 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:08.178 [2024-06-10 10:19:13.648396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:08.178 [2024-06-10 10:19:13.648453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.178 [2024-06-10 10:19:13.648492] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b72f780 00:15:08.178 [2024-06-10 10:19:13.648500] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.178 [2024-06-10 10:19:13.648968] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.178 [2024-06-10 10:19:13.648991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.178 BaseBdev1 00:15:08.178 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in 
"${base_bdevs[@]}" 00:15:08.178 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:08.436 BaseBdev2_malloc 00:15:08.436 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:08.694 true 00:15:08.694 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:08.694 [2024-06-10 10:19:14.284413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:08.694 [2024-06-10 10:19:14.284457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.694 [2024-06-10 10:19:14.284479] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b72fc80 00:15:08.694 [2024-06-10 10:19:14.284486] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.694 [2024-06-10 10:19:14.284941] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.694 [2024-06-10 10:19:14.284972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:08.694 BaseBdev2 00:15:08.952 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:08.953 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:09.211 BaseBdev3_malloc 00:15:09.211 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:09.470 true 00:15:09.470 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:09.729 [2024-06-10 10:19:15.120445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:09.729 [2024-06-10 10:19:15.120494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.729 [2024-06-10 10:19:15.120516] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b730180 00:15:09.729 [2024-06-10 10:19:15.120524] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.729 [2024-06-10 10:19:15.120988] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.729 [2024-06-10 10:19:15.121017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:09.729 BaseBdev3 00:15:09.729 10:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:09.729 10:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:09.987 BaseBdev4_malloc 00:15:09.987 10:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:09.987 true 00:15:09.987 
10:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:10.245 [2024-06-10 10:19:15.828468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:10.245 [2024-06-10 10:19:15.828518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.245 [2024-06-10 10:19:15.828542] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b730680 00:15:10.245 [2024-06-10 10:19:15.828550] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.245 [2024-06-10 10:19:15.829054] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.245 [2024-06-10 10:19:15.829084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:10.245 BaseBdev4 00:15:10.245 10:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:10.504 [2024-06-10 10:19:16.104489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.504 [2024-06-10 10:19:16.104892] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.504 [2024-06-10 10:19:16.104907] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:10.504 [2024-06-10 10:19:16.104919] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:10.504 [2024-06-10 10:19:16.104974] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b730900 00:15:10.504 [2024-06-10 10:19:16.104979] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:10.504 [2024-06-10 10:19:16.105011] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b79be20 00:15:10.504 [2024-06-10 10:19:16.105065] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b730900 00:15:10.504 [2024-06-10 10:19:16.105068] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b730900 00:15:10.504 [2024-06-10 10:19:16.105104] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.762 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:10.762 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:10.762 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:10.762 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:10.762 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.763 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:10.763 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.763 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.763 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.763 10:19:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.763 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.763 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.763 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.763 "name": "raid_bdev1", 00:15:10.763 "uuid": "e411db5c-2712-11ef-b084-113036b5c18d", 00:15:10.763 "strip_size_kb": 64, 00:15:10.763 "state": "online", 00:15:10.763 "raid_level": "concat", 00:15:10.763 "superblock": true, 00:15:10.763 "num_base_bdevs": 4, 00:15:10.763 "num_base_bdevs_discovered": 4, 00:15:10.763 "num_base_bdevs_operational": 4, 00:15:10.763 "base_bdevs_list": [ 00:15:10.763 { 00:15:10.763 "name": "BaseBdev1", 00:15:10.763 "uuid": "6a2c505c-4d32-155f-8c8e-c59c0dfeb3b7", 00:15:10.763 "is_configured": true, 00:15:10.763 "data_offset": 2048, 00:15:10.763 "data_size": 63488 00:15:10.763 }, 00:15:10.763 { 00:15:10.763 "name": "BaseBdev2", 00:15:10.763 "uuid": "72025949-97ee-cb5d-b1a9-9a438960a44d", 00:15:10.763 "is_configured": true, 00:15:10.763 "data_offset": 2048, 00:15:10.763 "data_size": 63488 00:15:10.763 }, 00:15:10.763 { 00:15:10.763 "name": "BaseBdev3", 00:15:10.763 "uuid": "d6944626-787c-735a-b6dd-10ff3287d178", 00:15:10.763 "is_configured": true, 00:15:10.763 "data_offset": 2048, 00:15:10.763 "data_size": 63488 00:15:10.763 }, 00:15:10.763 { 00:15:10.763 "name": "BaseBdev4", 00:15:10.763 "uuid": "521a8cc1-98d2-985d-9814-df77685d846c", 00:15:10.763 "is_configured": true, 00:15:10.763 "data_offset": 2048, 00:15:10.763 "data_size": 63488 00:15:10.763 } 00:15:10.763 ] 00:15:10.763 }' 00:15:10.763 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.763 10:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.021 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:11.021 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:11.279 [2024-06-10 10:19:16.736565] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b79bec0 00:15:12.215 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.473 10:19:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.732 10:19:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.732 "name": "raid_bdev1", 00:15:12.732 "uuid": "e411db5c-2712-11ef-b084-113036b5c18d", 00:15:12.732 "strip_size_kb": 64, 00:15:12.732 "state": "online", 00:15:12.732 "raid_level": "concat", 00:15:12.732 "superblock": true, 00:15:12.732 "num_base_bdevs": 4, 00:15:12.732 "num_base_bdevs_discovered": 4, 00:15:12.732 "num_base_bdevs_operational": 4, 00:15:12.732 "base_bdevs_list": [ 00:15:12.732 { 00:15:12.732 "name": "BaseBdev1", 00:15:12.732 "uuid": "6a2c505c-4d32-155f-8c8e-c59c0dfeb3b7", 00:15:12.732 "is_configured": true, 00:15:12.732 "data_offset": 2048, 00:15:12.732 "data_size": 63488 00:15:12.732 }, 00:15:12.732 { 00:15:12.732 "name": "BaseBdev2", 00:15:12.732 "uuid": "72025949-97ee-cb5d-b1a9-9a438960a44d", 00:15:12.732 "is_configured": true, 00:15:12.732 "data_offset": 2048, 00:15:12.732 "data_size": 63488 00:15:12.732 }, 00:15:12.732 { 00:15:12.732 "name": "BaseBdev3", 00:15:12.732 "uuid": "d6944626-787c-735a-b6dd-10ff3287d178", 00:15:12.732 "is_configured": true, 00:15:12.732 "data_offset": 2048, 00:15:12.732 "data_size": 63488 00:15:12.732 }, 00:15:12.732 { 00:15:12.732 "name": "BaseBdev4", 00:15:12.732 "uuid": "521a8cc1-98d2-985d-9814-df77685d846c", 00:15:12.732 "is_configured": true, 00:15:12.732 "data_offset": 2048, 00:15:12.732 "data_size": 63488 00:15:12.732 } 00:15:12.732 ] 00:15:12.732 }' 00:15:12.732 10:19:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.732 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.991 10:19:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:13.250 [2024-06-10 10:19:18.785681] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.250 [2024-06-10 10:19:18.785708] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.250 [2024-06-10 10:19:18.785973] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.250 [2024-06-10 10:19:18.785982] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.250 [2024-06-10 10:19:18.785990] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.250 [2024-06-10 10:19:18.785995] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b730900 name raid_bdev1, state offline 00:15:13.250 0 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 63519 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@949 -- # '[' -z 63519 ']' 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 63519 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 63519 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # tail -1 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:15:13.250 killing process with pid 63519 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63519' 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 63519 00:15:13.250 [2024-06-10 10:19:18.813687] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:13.250 10:19:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # wait 63519 00:15:13.250 [2024-06-10 10:19:18.832911] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.509 10:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.BL6NmFnb 00:15:13.509 10:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:13.509 10:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:13.509 10:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:15:13.509 10:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:13.509 10:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:13.509 10:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:13.509 10:19:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:15:13.509 00:15:13.510 real 0m7.048s 00:15:13.510 user 0m11.272s 00:15:13.510 sys 0m1.080s 00:15:13.510 10:19:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:13.510 10:19:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 ************************************ 00:15:13.510 END TEST raid_read_error_test 00:15:13.510 ************************************ 00:15:13.510 10:19:19 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:13.510 10:19:19 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:15:13.510 10:19:19 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:13.510 10:19:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 ************************************ 00:15:13.510 START TEST raid_write_error_test 00:15:13.510 ************************************ 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test concat 4 write 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:13.510 10:19:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.nYCj7Nmq 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=63657 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 63657 /var/tmp/spdk-raid.sock 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 63657 ']' 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:13.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:13.510 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.510 [2024-06-10 10:19:19.068294] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:15:13.510 [2024-06-10 10:19:19.068503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:14.077 EAL: TSC is not safe to use in SMP mode 00:15:14.077 EAL: TSC is not invariant 00:15:14.077 [2024-06-10 10:19:19.578537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.077 [2024-06-10 10:19:19.660558] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:14.077 [2024-06-10 10:19:19.662719] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.077 [2024-06-10 10:19:19.663433] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.077 [2024-06-10 10:19:19.663444] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.643 10:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:14.644 10:19:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:15:14.644 10:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:14.644 10:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.901 BaseBdev1_malloc 00:15:14.901 10:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:15.159 true 00:15:15.159 10:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:15.418 [2024-06-10 10:19:20.810262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:15.418 [2024-06-10 10:19:20.810337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.418 [2024-06-10 10:19:20.810363] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ac0e780 00:15:15.418 [2024-06-10 10:19:20.810371] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.418 [2024-06-10 10:19:20.810907] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.418 [2024-06-10 10:19:20.810978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:15.418 BaseBdev1 00:15:15.418 10:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in 
"${base_bdevs[@]}" 00:15:15.418 10:19:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:15.677 BaseBdev2_malloc 00:15:15.677 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:15.936 true 00:15:15.936 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:16.195 [2024-06-10 10:19:21.666283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:16.195 [2024-06-10 10:19:21.666339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.195 [2024-06-10 10:19:21.666361] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ac0ec80 00:15:16.195 [2024-06-10 10:19:21.666368] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.195 [2024-06-10 10:19:21.666874] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.195 [2024-06-10 10:19:21.666896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:16.195 BaseBdev2 00:15:16.195 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:16.195 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:16.454 BaseBdev3_malloc 00:15:16.454 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:16.711 true 00:15:16.711 10:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:16.969 [2024-06-10 10:19:22.486316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:16.969 [2024-06-10 10:19:22.486367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.969 [2024-06-10 10:19:22.486390] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ac0f180 00:15:16.969 [2024-06-10 10:19:22.486397] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.969 [2024-06-10 10:19:22.486899] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.969 [2024-06-10 10:19:22.486946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:16.969 BaseBdev3 00:15:16.969 10:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:16.969 10:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:17.228 BaseBdev4_malloc 00:15:17.228 10:19:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:17.516 true 
00:15:17.516 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:17.774 [2024-06-10 10:19:23.310351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:17.774 [2024-06-10 10:19:23.310411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.774 [2024-06-10 10:19:23.310438] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ac0f680 00:15:17.774 [2024-06-10 10:19:23.310446] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.774 [2024-06-10 10:19:23.311025] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.774 [2024-06-10 10:19:23.311056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:17.774 BaseBdev4 00:15:17.774 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:18.031 [2024-06-10 10:19:23.590362] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.031 [2024-06-10 10:19:23.590814] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.031 [2024-06-10 10:19:23.590833] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.032 [2024-06-10 10:19:23.590844] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:18.032 [2024-06-10 10:19:23.590901] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ac0f900 00:15:18.032 [2024-06-10 10:19:23.590906] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:18.032 [2024-06-10 10:19:23.590938] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac7ae20 00:15:18.032 [2024-06-10 10:19:23.590993] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ac0f900 00:15:18.032 [2024-06-10 10:19:23.590997] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82ac0f900 00:15:18.032 [2024-06-10 10:19:23.591016] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.032 10:19:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.032 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.597 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.597 "name": "raid_bdev1", 00:15:18.597 "uuid": "e8881c42-2712-11ef-b084-113036b5c18d", 00:15:18.597 "strip_size_kb": 64, 00:15:18.597 "state": "online", 00:15:18.597 "raid_level": "concat", 00:15:18.597 "superblock": true, 00:15:18.597 "num_base_bdevs": 4, 00:15:18.597 "num_base_bdevs_discovered": 4, 00:15:18.597 "num_base_bdevs_operational": 4, 00:15:18.597 "base_bdevs_list": [ 00:15:18.597 { 00:15:18.597 "name": "BaseBdev1", 00:15:18.597 "uuid": "dc09454c-1d9c-5050-bdce-12faa137bb69", 00:15:18.597 "is_configured": true, 00:15:18.597 "data_offset": 2048, 00:15:18.597 "data_size": 63488 00:15:18.597 }, 00:15:18.597 { 00:15:18.597 "name": "BaseBdev2", 00:15:18.597 "uuid": "67b63247-4f50-8e5b-b6f3-578ddad7019f", 00:15:18.597 "is_configured": true, 00:15:18.597 "data_offset": 2048, 00:15:18.597 "data_size": 63488 00:15:18.597 }, 00:15:18.597 { 00:15:18.597 "name": "BaseBdev3", 00:15:18.597 "uuid": "9a4bf3cc-a344-d55b-acfb-fe22c0406fda", 00:15:18.597 "is_configured": true, 00:15:18.597 "data_offset": 2048, 00:15:18.597 "data_size": 63488 00:15:18.597 }, 00:15:18.597 { 00:15:18.597 "name": "BaseBdev4", 00:15:18.597 "uuid": "aff76194-b16a-5658-8db0-3790df053e79", 00:15:18.597 "is_configured": true, 00:15:18.597 "data_offset": 2048, 00:15:18.597 "data_size": 63488 00:15:18.597 } 00:15:18.597 ] 00:15:18.597 }' 00:15:18.597 10:19:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.597 10:19:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.597 10:19:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:18.597 10:19:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:18.855 [2024-06-10 10:19:24.278463] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac7aec0 00:15:19.841 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.098 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.355 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.355 "name": "raid_bdev1", 00:15:20.355 "uuid": "e8881c42-2712-11ef-b084-113036b5c18d", 00:15:20.355 "strip_size_kb": 64, 00:15:20.355 "state": "online", 00:15:20.355 "raid_level": "concat", 00:15:20.355 "superblock": true, 00:15:20.355 "num_base_bdevs": 4, 00:15:20.355 "num_base_bdevs_discovered": 4, 00:15:20.355 "num_base_bdevs_operational": 4, 00:15:20.355 "base_bdevs_list": [ 00:15:20.355 { 00:15:20.355 "name": "BaseBdev1", 00:15:20.355 "uuid": "dc09454c-1d9c-5050-bdce-12faa137bb69", 00:15:20.355 "is_configured": true, 00:15:20.355 "data_offset": 2048, 00:15:20.355 "data_size": 63488 00:15:20.356 }, 00:15:20.356 { 00:15:20.356 "name": "BaseBdev2", 00:15:20.356 "uuid": "67b63247-4f50-8e5b-b6f3-578ddad7019f", 00:15:20.356 "is_configured": true, 00:15:20.356 "data_offset": 2048, 00:15:20.356 "data_size": 63488 00:15:20.356 }, 00:15:20.356 { 00:15:20.356 "name": "BaseBdev3", 00:15:20.356 "uuid": "9a4bf3cc-a344-d55b-acfb-fe22c0406fda", 00:15:20.356 "is_configured": true, 00:15:20.356 "data_offset": 2048, 00:15:20.356 "data_size": 63488 00:15:20.356 }, 00:15:20.356 { 00:15:20.356 "name": "BaseBdev4", 00:15:20.356 "uuid": "aff76194-b16a-5658-8db0-3790df053e79", 00:15:20.356 "is_configured": true, 00:15:20.356 "data_offset": 2048, 00:15:20.356 "data_size": 63488 00:15:20.356 } 00:15:20.356 ] 00:15:20.356 }' 00:15:20.356 10:19:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.356 10:19:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.614 10:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:20.873 [2024-06-10 10:19:26.284252] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.873 [2024-06-10 10:19:26.284289] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.873 [2024-06-10 10:19:26.284703] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.873 [2024-06-10 10:19:26.284717] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.873 [2024-06-10 10:19:26.284731] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.873 [2024-06-10 10:19:26.284736] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ac0f900 name raid_bdev1, state offline 00:15:20.873 0 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
killprocess 63657 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 63657 ']' 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 63657 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 63657 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # tail -1 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63657' 00:15:20.873 killing process with pid 63657 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 63657 00:15:20.873 [2024-06-10 10:19:26.317162] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.873 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 63657 00:15:20.873 [2024-06-10 10:19:26.336809] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.133 10:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.nYCj7Nmq 00:15:21.133 10:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:21.133 10:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:21.133 10:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:15:21.133 10:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:21.133 10:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:21.133 10:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:21.133 10:19:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:15:21.133 00:15:21.133 real 0m7.466s 00:15:21.133 user 0m12.011s 00:15:21.133 sys 0m1.159s 00:15:21.133 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:21.133 ************************************ 00:15:21.133 END TEST raid_write_error_test 00:15:21.133 ************************************ 00:15:21.133 10:19:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.133 10:19:26 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:21.133 10:19:26 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:21.133 10:19:26 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:15:21.133 10:19:26 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:21.133 10:19:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.133 ************************************ 00:15:21.133 START TEST raid_state_function_test 00:15:21.133 ************************************ 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 4 false 00:15:21.133 10:19:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:21.133 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=63793 00:15:21.134 Process raid pid: 63793 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63793' 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 63793 /var/tmp/spdk-raid.sock 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@830 -- # '[' -z 63793 ']' 00:15:21.134 10:19:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:21.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:21.134 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.134 [2024-06-10 10:19:26.572738] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:15:21.134 [2024-06-10 10:19:26.572986] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:21.703 EAL: TSC is not safe to use in SMP mode 00:15:21.703 EAL: TSC is not invariant 00:15:21.703 [2024-06-10 10:19:27.060572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.703 [2024-06-10 10:19:27.137789] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:21.703 [2024-06-10 10:19:27.139881] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.703 [2024-06-10 10:19:27.140589] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.703 [2024-06-10 10:19:27.140601] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@863 -- # return 0 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:22.285 [2024-06-10 10:19:27.774804] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.285 [2024-06-10 10:19:27.774859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.285 [2024-06-10 10:19:27.774863] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.285 [2024-06-10 10:19:27.774871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.285 [2024-06-10 10:19:27.774874] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:22.285 [2024-06-10 10:19:27.774881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:22.285 [2024-06-10 10:19:27.774884] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:22.285 [2024-06-10 10:19:27.774890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:22.285 10:19:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.285 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.543 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.543 "name": "Existed_Raid", 00:15:22.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.543 "strip_size_kb": 0, 00:15:22.543 "state": "configuring", 00:15:22.543 "raid_level": "raid1", 00:15:22.543 "superblock": false, 00:15:22.543 "num_base_bdevs": 4, 00:15:22.543 "num_base_bdevs_discovered": 0, 00:15:22.543 "num_base_bdevs_operational": 4, 00:15:22.543 "base_bdevs_list": [ 00:15:22.543 { 00:15:22.543 "name": "BaseBdev1", 00:15:22.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.543 "is_configured": false, 00:15:22.543 "data_offset": 0, 00:15:22.543 "data_size": 0 00:15:22.543 }, 00:15:22.543 { 00:15:22.543 "name": "BaseBdev2", 00:15:22.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.543 "is_configured": false, 00:15:22.543 "data_offset": 0, 00:15:22.543 "data_size": 0 00:15:22.543 }, 00:15:22.543 { 00:15:22.543 "name": "BaseBdev3", 00:15:22.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.543 "is_configured": false, 00:15:22.543 "data_offset": 0, 00:15:22.543 "data_size": 0 00:15:22.543 }, 00:15:22.543 { 00:15:22.543 "name": "BaseBdev4", 00:15:22.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.543 "is_configured": false, 00:15:22.543 "data_offset": 0, 00:15:22.543 "data_size": 0 00:15:22.543 } 00:15:22.543 ] 00:15:22.543 }' 00:15:22.543 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.543 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.801 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:23.058 [2024-06-10 10:19:28.534816] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.058 [2024-06-10 10:19:28.534841] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a991500 name Existed_Raid, state configuring 00:15:23.058 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:23.316 [2024-06-10 10:19:28.810835] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.316 [2024-06-10 10:19:28.810901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.316 [2024-06-10 10:19:28.810905] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.316 [2024-06-10 10:19:28.810913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.316 [2024-06-10 10:19:28.810916] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.316 [2024-06-10 10:19:28.810923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.316 [2024-06-10 10:19:28.810925] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.316 [2024-06-10 10:19:28.810932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.316 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.574 [2024-06-10 10:19:29.031750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.574 BaseBdev1 00:15:23.574 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:23.574 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:15:23.574 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:23.574 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:23.574 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:23.574 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:23.574 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.929 [ 00:15:23.929 { 00:15:23.929 "name": "BaseBdev1", 00:15:23.929 "aliases": [ 00:15:23.929 "ebc64405-2712-11ef-b084-113036b5c18d" 00:15:23.929 ], 00:15:23.929 "product_name": "Malloc disk", 00:15:23.929 "block_size": 512, 00:15:23.929 "num_blocks": 65536, 00:15:23.929 "uuid": "ebc64405-2712-11ef-b084-113036b5c18d", 00:15:23.929 "assigned_rate_limits": { 00:15:23.929 "rw_ios_per_sec": 0, 00:15:23.929 "rw_mbytes_per_sec": 0, 00:15:23.929 "r_mbytes_per_sec": 0, 00:15:23.929 "w_mbytes_per_sec": 0 00:15:23.929 }, 00:15:23.929 "claimed": true, 00:15:23.929 "claim_type": "exclusive_write", 00:15:23.929 "zoned": false, 00:15:23.929 "supported_io_types": { 00:15:23.929 "read": true, 00:15:23.929 "write": true, 00:15:23.929 "unmap": true, 00:15:23.929 "write_zeroes": true, 00:15:23.929 "flush": true, 00:15:23.929 "reset": true, 00:15:23.929 "compare": false, 00:15:23.929 "compare_and_write": false, 00:15:23.929 "abort": true, 00:15:23.929 "nvme_admin": false, 
00:15:23.929 "nvme_io": false 00:15:23.929 }, 00:15:23.929 "memory_domains": [ 00:15:23.929 { 00:15:23.929 "dma_device_id": "system", 00:15:23.929 "dma_device_type": 1 00:15:23.929 }, 00:15:23.929 { 00:15:23.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.929 "dma_device_type": 2 00:15:23.929 } 00:15:23.929 ], 00:15:23.929 "driver_specific": {} 00:15:23.929 } 00:15:23.929 ] 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.929 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.187 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.187 "name": "Existed_Raid", 00:15:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.187 "strip_size_kb": 0, 00:15:24.187 "state": "configuring", 00:15:24.187 "raid_level": "raid1", 00:15:24.187 "superblock": false, 00:15:24.187 "num_base_bdevs": 4, 00:15:24.187 "num_base_bdevs_discovered": 1, 00:15:24.187 "num_base_bdevs_operational": 4, 00:15:24.187 "base_bdevs_list": [ 00:15:24.187 { 00:15:24.187 "name": "BaseBdev1", 00:15:24.187 "uuid": "ebc64405-2712-11ef-b084-113036b5c18d", 00:15:24.187 "is_configured": true, 00:15:24.187 "data_offset": 0, 00:15:24.187 "data_size": 65536 00:15:24.187 }, 00:15:24.187 { 00:15:24.187 "name": "BaseBdev2", 00:15:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.187 "is_configured": false, 00:15:24.187 "data_offset": 0, 00:15:24.187 "data_size": 0 00:15:24.187 }, 00:15:24.187 { 00:15:24.187 "name": "BaseBdev3", 00:15:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.187 "is_configured": false, 00:15:24.187 "data_offset": 0, 00:15:24.187 "data_size": 0 00:15:24.187 }, 00:15:24.187 { 00:15:24.187 "name": "BaseBdev4", 00:15:24.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.187 "is_configured": false, 00:15:24.187 "data_offset": 0, 00:15:24.187 "data_size": 0 00:15:24.187 } 00:15:24.187 ] 00:15:24.187 }' 00:15:24.187 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.187 10:19:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.449 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:24.707 [2024-06-10 10:19:30.206894] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.707 [2024-06-10 10:19:30.206923] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a991500 name Existed_Raid, state configuring 00:15:24.707 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:24.966 [2024-06-10 10:19:30.522908] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.966 [2024-06-10 10:19:30.523532] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.966 [2024-06-10 10:19:30.523589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.966 [2024-06-10 10:19:30.523594] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.966 [2024-06-10 10:19:30.523601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.966 [2024-06-10 10:19:30.523604] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.966 [2024-06-10 10:19:30.523611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.966 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.225 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.225 "name": "Existed_Raid", 00:15:25.225 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:25.225 "strip_size_kb": 0, 00:15:25.225 "state": "configuring", 00:15:25.225 "raid_level": "raid1", 00:15:25.225 "superblock": false, 00:15:25.225 "num_base_bdevs": 4, 00:15:25.225 "num_base_bdevs_discovered": 1, 00:15:25.225 "num_base_bdevs_operational": 4, 00:15:25.225 "base_bdevs_list": [ 00:15:25.225 { 00:15:25.225 "name": "BaseBdev1", 00:15:25.225 "uuid": "ebc64405-2712-11ef-b084-113036b5c18d", 00:15:25.225 "is_configured": true, 00:15:25.225 "data_offset": 0, 00:15:25.225 "data_size": 65536 00:15:25.225 }, 00:15:25.225 { 00:15:25.225 "name": "BaseBdev2", 00:15:25.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.225 "is_configured": false, 00:15:25.225 "data_offset": 0, 00:15:25.225 "data_size": 0 00:15:25.225 }, 00:15:25.225 { 00:15:25.225 "name": "BaseBdev3", 00:15:25.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.225 "is_configured": false, 00:15:25.225 "data_offset": 0, 00:15:25.225 "data_size": 0 00:15:25.225 }, 00:15:25.225 { 00:15:25.225 "name": "BaseBdev4", 00:15:25.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.225 "is_configured": false, 00:15:25.225 "data_offset": 0, 00:15:25.225 "data_size": 0 00:15:25.225 } 00:15:25.225 ] 00:15:25.225 }' 00:15:25.225 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.225 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.484 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:25.743 [2024-06-10 10:19:31.251064] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.743 BaseBdev2 00:15:25.743 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:25.743 10:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:15:25.743 10:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:25.743 10:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:25.743 10:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:25.743 10:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:25.743 10:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:26.001 10:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.259 [ 00:15:26.259 { 00:15:26.259 "name": "BaseBdev2", 00:15:26.259 "aliases": [ 00:15:26.259 "ed19069c-2712-11ef-b084-113036b5c18d" 00:15:26.259 ], 00:15:26.259 "product_name": "Malloc disk", 00:15:26.259 "block_size": 512, 00:15:26.259 "num_blocks": 65536, 00:15:26.259 "uuid": "ed19069c-2712-11ef-b084-113036b5c18d", 00:15:26.259 "assigned_rate_limits": { 00:15:26.259 "rw_ios_per_sec": 0, 00:15:26.259 "rw_mbytes_per_sec": 0, 00:15:26.259 "r_mbytes_per_sec": 0, 00:15:26.259 "w_mbytes_per_sec": 0 00:15:26.259 }, 00:15:26.259 "claimed": true, 00:15:26.259 "claim_type": "exclusive_write", 00:15:26.259 "zoned": false, 00:15:26.259 "supported_io_types": { 00:15:26.259 "read": true, 00:15:26.259 "write": 
true, 00:15:26.259 "unmap": true, 00:15:26.259 "write_zeroes": true, 00:15:26.259 "flush": true, 00:15:26.259 "reset": true, 00:15:26.259 "compare": false, 00:15:26.259 "compare_and_write": false, 00:15:26.259 "abort": true, 00:15:26.259 "nvme_admin": false, 00:15:26.259 "nvme_io": false 00:15:26.259 }, 00:15:26.259 "memory_domains": [ 00:15:26.259 { 00:15:26.259 "dma_device_id": "system", 00:15:26.259 "dma_device_type": 1 00:15:26.259 }, 00:15:26.259 { 00:15:26.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.259 "dma_device_type": 2 00:15:26.259 } 00:15:26.259 ], 00:15:26.259 "driver_specific": {} 00:15:26.259 } 00:15:26.259 ] 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.259 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.518 10:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.518 "name": "Existed_Raid", 00:15:26.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.518 "strip_size_kb": 0, 00:15:26.518 "state": "configuring", 00:15:26.518 "raid_level": "raid1", 00:15:26.518 "superblock": false, 00:15:26.518 "num_base_bdevs": 4, 00:15:26.518 "num_base_bdevs_discovered": 2, 00:15:26.518 "num_base_bdevs_operational": 4, 00:15:26.518 "base_bdevs_list": [ 00:15:26.518 { 00:15:26.518 "name": "BaseBdev1", 00:15:26.518 "uuid": "ebc64405-2712-11ef-b084-113036b5c18d", 00:15:26.518 "is_configured": true, 00:15:26.518 "data_offset": 0, 00:15:26.518 "data_size": 65536 00:15:26.518 }, 00:15:26.518 { 00:15:26.518 "name": "BaseBdev2", 00:15:26.518 "uuid": "ed19069c-2712-11ef-b084-113036b5c18d", 00:15:26.518 "is_configured": true, 00:15:26.518 "data_offset": 0, 00:15:26.518 "data_size": 65536 00:15:26.518 }, 00:15:26.518 { 00:15:26.518 "name": "BaseBdev3", 00:15:26.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.518 "is_configured": false, 00:15:26.518 
"data_offset": 0, 00:15:26.518 "data_size": 0 00:15:26.518 }, 00:15:26.518 { 00:15:26.518 "name": "BaseBdev4", 00:15:26.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.518 "is_configured": false, 00:15:26.518 "data_offset": 0, 00:15:26.518 "data_size": 0 00:15:26.518 } 00:15:26.518 ] 00:15:26.518 }' 00:15:26.518 10:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.518 10:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.777 10:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.035 [2024-06-10 10:19:32.495144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.035 BaseBdev3 00:15:27.035 10:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:27.035 10:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:15:27.035 10:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:27.035 10:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:27.035 10:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:27.035 10:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:27.035 10:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:27.293 10:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.553 [ 00:15:27.553 { 00:15:27.553 "name": "BaseBdev3", 00:15:27.553 "aliases": [ 00:15:27.553 "edd6dc15-2712-11ef-b084-113036b5c18d" 00:15:27.553 ], 00:15:27.553 "product_name": "Malloc disk", 00:15:27.553 "block_size": 512, 00:15:27.553 "num_blocks": 65536, 00:15:27.553 "uuid": "edd6dc15-2712-11ef-b084-113036b5c18d", 00:15:27.553 "assigned_rate_limits": { 00:15:27.553 "rw_ios_per_sec": 0, 00:15:27.553 "rw_mbytes_per_sec": 0, 00:15:27.553 "r_mbytes_per_sec": 0, 00:15:27.553 "w_mbytes_per_sec": 0 00:15:27.553 }, 00:15:27.553 "claimed": true, 00:15:27.553 "claim_type": "exclusive_write", 00:15:27.553 "zoned": false, 00:15:27.553 "supported_io_types": { 00:15:27.553 "read": true, 00:15:27.553 "write": true, 00:15:27.553 "unmap": true, 00:15:27.553 "write_zeroes": true, 00:15:27.553 "flush": true, 00:15:27.553 "reset": true, 00:15:27.553 "compare": false, 00:15:27.553 "compare_and_write": false, 00:15:27.553 "abort": true, 00:15:27.553 "nvme_admin": false, 00:15:27.553 "nvme_io": false 00:15:27.553 }, 00:15:27.553 "memory_domains": [ 00:15:27.553 { 00:15:27.553 "dma_device_id": "system", 00:15:27.553 "dma_device_type": 1 00:15:27.553 }, 00:15:27.553 { 00:15:27.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.553 "dma_device_type": 2 00:15:27.553 } 00:15:27.553 ], 00:15:27.553 "driver_specific": {} 00:15:27.553 } 00:15:27.553 ] 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.553 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.812 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.812 "name": "Existed_Raid", 00:15:27.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.812 "strip_size_kb": 0, 00:15:27.812 "state": "configuring", 00:15:27.812 "raid_level": "raid1", 00:15:27.812 "superblock": false, 00:15:27.812 "num_base_bdevs": 4, 00:15:27.812 "num_base_bdevs_discovered": 3, 00:15:27.812 "num_base_bdevs_operational": 4, 00:15:27.812 "base_bdevs_list": [ 00:15:27.812 { 00:15:27.812 "name": "BaseBdev1", 00:15:27.812 "uuid": "ebc64405-2712-11ef-b084-113036b5c18d", 00:15:27.812 "is_configured": true, 00:15:27.812 "data_offset": 0, 00:15:27.812 "data_size": 65536 00:15:27.812 }, 00:15:27.812 { 00:15:27.812 "name": "BaseBdev2", 00:15:27.812 "uuid": "ed19069c-2712-11ef-b084-113036b5c18d", 00:15:27.812 "is_configured": true, 00:15:27.812 "data_offset": 0, 00:15:27.812 "data_size": 65536 00:15:27.812 }, 00:15:27.812 { 00:15:27.812 "name": "BaseBdev3", 00:15:27.812 "uuid": "edd6dc15-2712-11ef-b084-113036b5c18d", 00:15:27.812 "is_configured": true, 00:15:27.812 "data_offset": 0, 00:15:27.812 "data_size": 65536 00:15:27.812 }, 00:15:27.812 { 00:15:27.812 "name": "BaseBdev4", 00:15:27.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.812 "is_configured": false, 00:15:27.812 "data_offset": 0, 00:15:27.812 "data_size": 0 00:15:27.812 } 00:15:27.812 ] 00:15:27.812 }' 00:15:27.812 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.812 10:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.071 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:28.331 [2024-06-10 10:19:33.863175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:28.331 [2024-06-10 10:19:33.863201] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a991a00 00:15:28.331 [2024-06-10 10:19:33.863205] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:28.331 [2024-06-10 10:19:33.863233] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a9f4ec0 00:15:28.331 [2024-06-10 10:19:33.863317] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a991a00 00:15:28.331 [2024-06-10 10:19:33.863321] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a991a00 00:15:28.331 [2024-06-10 10:19:33.863346] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.331 BaseBdev4 00:15:28.331 10:19:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:28.331 10:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:15:28.331 10:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:28.331 10:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:28.331 10:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:28.331 10:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:28.331 10:19:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.599 10:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:28.858 [ 00:15:28.858 { 00:15:28.858 "name": "BaseBdev4", 00:15:28.858 "aliases": [ 00:15:28.858 "eea79b5a-2712-11ef-b084-113036b5c18d" 00:15:28.858 ], 00:15:28.858 "product_name": "Malloc disk", 00:15:28.858 "block_size": 512, 00:15:28.858 "num_blocks": 65536, 00:15:28.858 "uuid": "eea79b5a-2712-11ef-b084-113036b5c18d", 00:15:28.858 "assigned_rate_limits": { 00:15:28.858 "rw_ios_per_sec": 0, 00:15:28.858 "rw_mbytes_per_sec": 0, 00:15:28.858 "r_mbytes_per_sec": 0, 00:15:28.858 "w_mbytes_per_sec": 0 00:15:28.858 }, 00:15:28.858 "claimed": true, 00:15:28.858 "claim_type": "exclusive_write", 00:15:28.858 "zoned": false, 00:15:28.858 "supported_io_types": { 00:15:28.858 "read": true, 00:15:28.858 "write": true, 00:15:28.858 "unmap": true, 00:15:28.858 "write_zeroes": true, 00:15:28.858 "flush": true, 00:15:28.858 "reset": true, 00:15:28.858 "compare": false, 00:15:28.858 "compare_and_write": false, 00:15:28.858 "abort": true, 00:15:28.858 "nvme_admin": false, 00:15:28.858 "nvme_io": false 00:15:28.858 }, 00:15:28.858 "memory_domains": [ 00:15:28.858 { 00:15:28.858 "dma_device_id": "system", 00:15:28.858 "dma_device_type": 1 00:15:28.858 }, 00:15:28.858 { 00:15:28.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.858 "dma_device_type": 2 00:15:28.858 } 00:15:28.858 ], 00:15:28.858 "driver_specific": {} 00:15:28.858 } 00:15:28.858 ] 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.858 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.116 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.116 "name": "Existed_Raid", 00:15:29.116 "uuid": "eea79ff6-2712-11ef-b084-113036b5c18d", 00:15:29.116 "strip_size_kb": 0, 00:15:29.116 "state": "online", 00:15:29.116 "raid_level": "raid1", 00:15:29.116 "superblock": false, 00:15:29.116 "num_base_bdevs": 4, 00:15:29.116 "num_base_bdevs_discovered": 4, 00:15:29.117 "num_base_bdevs_operational": 4, 00:15:29.117 "base_bdevs_list": [ 00:15:29.117 { 00:15:29.117 "name": "BaseBdev1", 00:15:29.117 "uuid": "ebc64405-2712-11ef-b084-113036b5c18d", 00:15:29.117 "is_configured": true, 00:15:29.117 "data_offset": 0, 00:15:29.117 "data_size": 65536 00:15:29.117 }, 00:15:29.117 { 00:15:29.117 "name": "BaseBdev2", 00:15:29.117 "uuid": "ed19069c-2712-11ef-b084-113036b5c18d", 00:15:29.117 "is_configured": true, 00:15:29.117 "data_offset": 0, 00:15:29.117 "data_size": 65536 00:15:29.117 }, 00:15:29.117 { 00:15:29.117 "name": "BaseBdev3", 00:15:29.117 "uuid": "edd6dc15-2712-11ef-b084-113036b5c18d", 00:15:29.117 "is_configured": true, 00:15:29.117 "data_offset": 0, 00:15:29.117 "data_size": 65536 00:15:29.117 }, 00:15:29.117 { 00:15:29.117 "name": "BaseBdev4", 00:15:29.117 "uuid": "eea79b5a-2712-11ef-b084-113036b5c18d", 00:15:29.117 "is_configured": true, 00:15:29.117 "data_offset": 0, 00:15:29.117 "data_size": 65536 00:15:29.117 } 00:15:29.117 ] 00:15:29.117 }' 00:15:29.117 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.117 10:19:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.684 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:29.684 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:29.684 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:29.684 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:29.684 10:19:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:29.684 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:29.684 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:29.684 10:19:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:29.684 [2024-06-10 10:19:35.255183] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.684 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:29.684 "name": "Existed_Raid", 00:15:29.684 "aliases": [ 00:15:29.684 "eea79ff6-2712-11ef-b084-113036b5c18d" 00:15:29.684 ], 00:15:29.684 "product_name": "Raid Volume", 00:15:29.684 "block_size": 512, 00:15:29.684 "num_blocks": 65536, 00:15:29.684 "uuid": "eea79ff6-2712-11ef-b084-113036b5c18d", 00:15:29.684 "assigned_rate_limits": { 00:15:29.684 "rw_ios_per_sec": 0, 00:15:29.684 "rw_mbytes_per_sec": 0, 00:15:29.684 "r_mbytes_per_sec": 0, 00:15:29.684 "w_mbytes_per_sec": 0 00:15:29.684 }, 00:15:29.684 "claimed": false, 00:15:29.684 "zoned": false, 00:15:29.684 "supported_io_types": { 00:15:29.684 "read": true, 00:15:29.684 "write": true, 00:15:29.684 "unmap": false, 00:15:29.685 "write_zeroes": true, 00:15:29.685 "flush": false, 00:15:29.685 "reset": true, 00:15:29.685 "compare": false, 00:15:29.685 "compare_and_write": false, 00:15:29.685 "abort": false, 00:15:29.685 "nvme_admin": false, 00:15:29.685 "nvme_io": false 00:15:29.685 }, 00:15:29.685 "memory_domains": [ 00:15:29.685 { 00:15:29.685 "dma_device_id": "system", 00:15:29.685 "dma_device_type": 1 00:15:29.685 }, 00:15:29.685 { 00:15:29.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.685 "dma_device_type": 2 00:15:29.685 }, 00:15:29.685 { 00:15:29.685 "dma_device_id": "system", 00:15:29.685 "dma_device_type": 1 00:15:29.685 }, 00:15:29.685 { 00:15:29.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.685 "dma_device_type": 2 00:15:29.685 }, 00:15:29.685 { 00:15:29.685 "dma_device_id": "system", 00:15:29.685 "dma_device_type": 1 00:15:29.685 }, 00:15:29.685 { 00:15:29.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.685 "dma_device_type": 2 00:15:29.685 }, 00:15:29.685 { 00:15:29.685 "dma_device_id": "system", 00:15:29.685 "dma_device_type": 1 00:15:29.685 }, 00:15:29.685 { 00:15:29.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.685 "dma_device_type": 2 00:15:29.685 } 00:15:29.685 ], 00:15:29.685 "driver_specific": { 00:15:29.685 "raid": { 00:15:29.685 "uuid": "eea79ff6-2712-11ef-b084-113036b5c18d", 00:15:29.685 "strip_size_kb": 0, 00:15:29.685 "state": "online", 00:15:29.685 "raid_level": "raid1", 00:15:29.685 "superblock": false, 00:15:29.685 "num_base_bdevs": 4, 00:15:29.685 "num_base_bdevs_discovered": 4, 00:15:29.685 "num_base_bdevs_operational": 4, 00:15:29.685 "base_bdevs_list": [ 00:15:29.685 { 00:15:29.685 "name": "BaseBdev1", 00:15:29.685 "uuid": "ebc64405-2712-11ef-b084-113036b5c18d", 00:15:29.685 "is_configured": true, 00:15:29.685 "data_offset": 0, 00:15:29.685 "data_size": 65536 00:15:29.685 }, 00:15:29.685 { 00:15:29.685 "name": "BaseBdev2", 00:15:29.685 "uuid": "ed19069c-2712-11ef-b084-113036b5c18d", 00:15:29.685 "is_configured": true, 00:15:29.685 "data_offset": 0, 00:15:29.685 "data_size": 65536 00:15:29.685 }, 00:15:29.685 { 00:15:29.685 "name": "BaseBdev3", 00:15:29.685 "uuid": 
"edd6dc15-2712-11ef-b084-113036b5c18d", 00:15:29.685 "is_configured": true, 00:15:29.685 "data_offset": 0, 00:15:29.685 "data_size": 65536 00:15:29.685 }, 00:15:29.685 { 00:15:29.685 "name": "BaseBdev4", 00:15:29.685 "uuid": "eea79b5a-2712-11ef-b084-113036b5c18d", 00:15:29.685 "is_configured": true, 00:15:29.685 "data_offset": 0, 00:15:29.685 "data_size": 65536 00:15:29.685 } 00:15:29.685 ] 00:15:29.685 } 00:15:29.685 } 00:15:29.685 }' 00:15:29.685 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.685 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:29.685 BaseBdev2 00:15:29.685 BaseBdev3 00:15:29.685 BaseBdev4' 00:15:29.685 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:29.685 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:29.685 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.253 "name": "BaseBdev1", 00:15:30.253 "aliases": [ 00:15:30.253 "ebc64405-2712-11ef-b084-113036b5c18d" 00:15:30.253 ], 00:15:30.253 "product_name": "Malloc disk", 00:15:30.253 "block_size": 512, 00:15:30.253 "num_blocks": 65536, 00:15:30.253 "uuid": "ebc64405-2712-11ef-b084-113036b5c18d", 00:15:30.253 "assigned_rate_limits": { 00:15:30.253 "rw_ios_per_sec": 0, 00:15:30.253 "rw_mbytes_per_sec": 0, 00:15:30.253 "r_mbytes_per_sec": 0, 00:15:30.253 "w_mbytes_per_sec": 0 00:15:30.253 }, 00:15:30.253 "claimed": true, 00:15:30.253 "claim_type": "exclusive_write", 00:15:30.253 "zoned": false, 00:15:30.253 "supported_io_types": { 00:15:30.253 "read": true, 00:15:30.253 "write": true, 00:15:30.253 "unmap": true, 00:15:30.253 "write_zeroes": true, 00:15:30.253 "flush": true, 00:15:30.253 "reset": true, 00:15:30.253 "compare": false, 00:15:30.253 "compare_and_write": false, 00:15:30.253 "abort": true, 00:15:30.253 "nvme_admin": false, 00:15:30.253 "nvme_io": false 00:15:30.253 }, 00:15:30.253 "memory_domains": [ 00:15:30.253 { 00:15:30.253 "dma_device_id": "system", 00:15:30.253 "dma_device_type": 1 00:15:30.253 }, 00:15:30.253 { 00:15:30.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.253 "dma_device_type": 2 00:15:30.253 } 00:15:30.253 ], 00:15:30.253 "driver_specific": {} 00:15:30.253 }' 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:30.253 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.528 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.528 "name": "BaseBdev2", 00:15:30.528 "aliases": [ 00:15:30.528 "ed19069c-2712-11ef-b084-113036b5c18d" 00:15:30.528 ], 00:15:30.528 "product_name": "Malloc disk", 00:15:30.528 "block_size": 512, 00:15:30.528 "num_blocks": 65536, 00:15:30.528 "uuid": "ed19069c-2712-11ef-b084-113036b5c18d", 00:15:30.528 "assigned_rate_limits": { 00:15:30.528 "rw_ios_per_sec": 0, 00:15:30.528 "rw_mbytes_per_sec": 0, 00:15:30.528 "r_mbytes_per_sec": 0, 00:15:30.528 "w_mbytes_per_sec": 0 00:15:30.529 }, 00:15:30.529 "claimed": true, 00:15:30.529 "claim_type": "exclusive_write", 00:15:30.529 "zoned": false, 00:15:30.529 "supported_io_types": { 00:15:30.529 "read": true, 00:15:30.529 "write": true, 00:15:30.529 "unmap": true, 00:15:30.529 "write_zeroes": true, 00:15:30.529 "flush": true, 00:15:30.529 "reset": true, 00:15:30.529 "compare": false, 00:15:30.529 "compare_and_write": false, 00:15:30.529 "abort": true, 00:15:30.529 "nvme_admin": false, 00:15:30.529 "nvme_io": false 00:15:30.529 }, 00:15:30.529 "memory_domains": [ 00:15:30.529 { 00:15:30.529 "dma_device_id": "system", 00:15:30.529 "dma_device_type": 1 00:15:30.529 }, 00:15:30.529 { 00:15:30.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.529 "dma_device_type": 2 00:15:30.529 } 00:15:30.529 ], 00:15:30.529 "driver_specific": {} 00:15:30.529 }' 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:30.529 10:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:30.787 "name": "BaseBdev3", 00:15:30.787 "aliases": [ 00:15:30.787 "edd6dc15-2712-11ef-b084-113036b5c18d" 00:15:30.787 ], 00:15:30.787 "product_name": "Malloc disk", 00:15:30.787 "block_size": 512, 00:15:30.787 "num_blocks": 65536, 00:15:30.787 "uuid": "edd6dc15-2712-11ef-b084-113036b5c18d", 00:15:30.787 "assigned_rate_limits": { 00:15:30.787 "rw_ios_per_sec": 0, 00:15:30.787 "rw_mbytes_per_sec": 0, 00:15:30.787 "r_mbytes_per_sec": 0, 00:15:30.787 "w_mbytes_per_sec": 0 00:15:30.787 }, 00:15:30.787 "claimed": true, 00:15:30.787 "claim_type": "exclusive_write", 00:15:30.787 "zoned": false, 00:15:30.787 "supported_io_types": { 00:15:30.787 "read": true, 00:15:30.787 "write": true, 00:15:30.787 "unmap": true, 00:15:30.787 "write_zeroes": true, 00:15:30.787 "flush": true, 00:15:30.787 "reset": true, 00:15:30.787 "compare": false, 00:15:30.787 "compare_and_write": false, 00:15:30.787 "abort": true, 00:15:30.787 "nvme_admin": false, 00:15:30.787 "nvme_io": false 00:15:30.787 }, 00:15:30.787 "memory_domains": [ 00:15:30.787 { 00:15:30.787 "dma_device_id": "system", 00:15:30.787 "dma_device_type": 1 00:15:30.787 }, 00:15:30.787 { 00:15:30.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.787 "dma_device_type": 2 00:15:30.787 } 00:15:30.787 ], 00:15:30.787 "driver_specific": {} 00:15:30.787 }' 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:30.787 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:31.046 "name": "BaseBdev4", 00:15:31.046 "aliases": [ 00:15:31.046 "eea79b5a-2712-11ef-b084-113036b5c18d" 00:15:31.046 ], 00:15:31.046 "product_name": "Malloc 
disk", 00:15:31.046 "block_size": 512, 00:15:31.046 "num_blocks": 65536, 00:15:31.046 "uuid": "eea79b5a-2712-11ef-b084-113036b5c18d", 00:15:31.046 "assigned_rate_limits": { 00:15:31.046 "rw_ios_per_sec": 0, 00:15:31.046 "rw_mbytes_per_sec": 0, 00:15:31.046 "r_mbytes_per_sec": 0, 00:15:31.046 "w_mbytes_per_sec": 0 00:15:31.046 }, 00:15:31.046 "claimed": true, 00:15:31.046 "claim_type": "exclusive_write", 00:15:31.046 "zoned": false, 00:15:31.046 "supported_io_types": { 00:15:31.046 "read": true, 00:15:31.046 "write": true, 00:15:31.046 "unmap": true, 00:15:31.046 "write_zeroes": true, 00:15:31.046 "flush": true, 00:15:31.046 "reset": true, 00:15:31.046 "compare": false, 00:15:31.046 "compare_and_write": false, 00:15:31.046 "abort": true, 00:15:31.046 "nvme_admin": false, 00:15:31.046 "nvme_io": false 00:15:31.046 }, 00:15:31.046 "memory_domains": [ 00:15:31.046 { 00:15:31.046 "dma_device_id": "system", 00:15:31.046 "dma_device_type": 1 00:15:31.046 }, 00:15:31.046 { 00:15:31.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.046 "dma_device_type": 2 00:15:31.046 } 00:15:31.046 ], 00:15:31.046 "driver_specific": {} 00:15:31.046 }' 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:31.046 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:31.305 [2024-06-10 10:19:36.855227] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.305 10:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.563 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.563 "name": "Existed_Raid", 00:15:31.563 "uuid": "eea79ff6-2712-11ef-b084-113036b5c18d", 00:15:31.563 "strip_size_kb": 0, 00:15:31.563 "state": "online", 00:15:31.563 "raid_level": "raid1", 00:15:31.563 "superblock": false, 00:15:31.563 "num_base_bdevs": 4, 00:15:31.563 "num_base_bdevs_discovered": 3, 00:15:31.563 "num_base_bdevs_operational": 3, 00:15:31.563 "base_bdevs_list": [ 00:15:31.563 { 00:15:31.563 "name": null, 00:15:31.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.563 "is_configured": false, 00:15:31.563 "data_offset": 0, 00:15:31.563 "data_size": 65536 00:15:31.563 }, 00:15:31.563 { 00:15:31.563 "name": "BaseBdev2", 00:15:31.563 "uuid": "ed19069c-2712-11ef-b084-113036b5c18d", 00:15:31.563 "is_configured": true, 00:15:31.563 "data_offset": 0, 00:15:31.563 "data_size": 65536 00:15:31.563 }, 00:15:31.563 { 00:15:31.563 "name": "BaseBdev3", 00:15:31.563 "uuid": "edd6dc15-2712-11ef-b084-113036b5c18d", 00:15:31.563 "is_configured": true, 00:15:31.563 "data_offset": 0, 00:15:31.563 "data_size": 65536 00:15:31.563 }, 00:15:31.563 { 00:15:31.563 "name": "BaseBdev4", 00:15:31.563 "uuid": "eea79b5a-2712-11ef-b084-113036b5c18d", 00:15:31.563 "is_configured": true, 00:15:31.563 "data_offset": 0, 00:15:31.563 "data_size": 65536 00:15:31.564 } 00:15:31.564 ] 00:15:31.564 }' 00:15:31.564 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.564 10:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.132 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:32.132 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:32.132 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:32.132 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.132 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:32.132 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:32.132 10:19:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:32.391 [2024-06-10 10:19:37.916009] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:32.391 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:32.391 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:32.391 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.391 10:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:32.692 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:32.692 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:32.692 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:32.973 [2024-06-10 10:19:38.412806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:32.973 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:32.973 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:32.973 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.973 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:33.231 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:33.231 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.231 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:33.490 [2024-06-10 10:19:38.861568] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:33.490 [2024-06-10 10:19:38.861603] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.490 [2024-06-10 10:19:38.866367] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.490 [2024-06-10 10:19:38.866382] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.490 [2024-06-10 10:19:38.866385] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a991a00 name Existed_Raid, state offline 00:15:33.490 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:33.490 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:33.490 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.490 10:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:33.748 BaseBdev2 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:33.748 10:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.006 10:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.582 [ 00:15:34.582 { 00:15:34.582 "name": "BaseBdev2", 00:15:34.582 "aliases": [ 00:15:34.582 "f1e592e1-2712-11ef-b084-113036b5c18d" 00:15:34.582 ], 00:15:34.582 "product_name": "Malloc disk", 00:15:34.582 "block_size": 512, 00:15:34.582 "num_blocks": 65536, 00:15:34.582 "uuid": "f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:34.582 "assigned_rate_limits": { 00:15:34.582 "rw_ios_per_sec": 0, 00:15:34.582 "rw_mbytes_per_sec": 0, 00:15:34.582 "r_mbytes_per_sec": 0, 00:15:34.582 "w_mbytes_per_sec": 0 00:15:34.582 }, 00:15:34.582 "claimed": false, 00:15:34.582 "zoned": false, 00:15:34.582 "supported_io_types": { 00:15:34.582 "read": true, 00:15:34.582 "write": true, 00:15:34.582 "unmap": true, 00:15:34.582 "write_zeroes": true, 00:15:34.582 "flush": true, 00:15:34.582 "reset": true, 00:15:34.582 "compare": false, 00:15:34.582 "compare_and_write": false, 00:15:34.582 "abort": true, 00:15:34.582 "nvme_admin": false, 00:15:34.582 "nvme_io": false 00:15:34.582 }, 00:15:34.582 "memory_domains": [ 00:15:34.582 { 00:15:34.582 "dma_device_id": "system", 00:15:34.582 "dma_device_type": 1 00:15:34.582 }, 00:15:34.582 { 00:15:34.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.582 "dma_device_type": 2 00:15:34.582 } 00:15:34.582 ], 00:15:34.582 "driver_specific": {} 00:15:34.582 } 00:15:34.582 ] 00:15:34.582 10:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:34.582 10:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:34.582 10:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:34.582 10:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:34.582 BaseBdev3 00:15:34.841 10:19:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:34.841 10:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:15:34.841 10:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:34.841 10:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:34.841 10:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:34.841 10:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:34.841 10:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.099 10:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.358 [ 00:15:35.358 { 00:15:35.358 "name": "BaseBdev3", 00:15:35.358 "aliases": [ 00:15:35.358 "f26aa2e8-2712-11ef-b084-113036b5c18d" 00:15:35.358 ], 00:15:35.358 "product_name": "Malloc disk", 00:15:35.358 "block_size": 512, 00:15:35.358 "num_blocks": 65536, 00:15:35.358 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:35.358 "assigned_rate_limits": { 00:15:35.358 "rw_ios_per_sec": 0, 00:15:35.358 "rw_mbytes_per_sec": 0, 00:15:35.358 "r_mbytes_per_sec": 0, 00:15:35.358 "w_mbytes_per_sec": 0 00:15:35.358 }, 00:15:35.358 "claimed": false, 00:15:35.358 "zoned": false, 00:15:35.358 "supported_io_types": { 00:15:35.358 "read": true, 00:15:35.358 "write": true, 00:15:35.358 "unmap": true, 00:15:35.358 "write_zeroes": true, 00:15:35.358 "flush": true, 00:15:35.358 "reset": true, 00:15:35.358 "compare": false, 00:15:35.358 "compare_and_write": false, 00:15:35.358 "abort": true, 00:15:35.358 "nvme_admin": false, 00:15:35.358 "nvme_io": false 00:15:35.358 }, 00:15:35.358 "memory_domains": [ 00:15:35.358 { 00:15:35.358 "dma_device_id": "system", 00:15:35.358 "dma_device_type": 1 00:15:35.358 }, 00:15:35.358 { 00:15:35.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.358 "dma_device_type": 2 00:15:35.358 } 00:15:35.358 ], 00:15:35.358 "driver_specific": {} 00:15:35.358 } 00:15:35.358 ] 00:15:35.358 10:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:35.358 10:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:35.358 10:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:35.358 10:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:35.617 BaseBdev4 00:15:35.617 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:35.617 10:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:15:35.617 10:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:35.617 10:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:35.617 10:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:35.617 10:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:35.617 10:19:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.874 10:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:36.132 [ 00:15:36.132 { 00:15:36.132 "name": "BaseBdev4", 00:15:36.132 "aliases": [ 00:15:36.132 "f2faae95-2712-11ef-b084-113036b5c18d" 00:15:36.132 ], 00:15:36.132 "product_name": "Malloc disk", 00:15:36.132 "block_size": 512, 00:15:36.132 "num_blocks": 65536, 00:15:36.132 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:36.132 "assigned_rate_limits": { 00:15:36.132 "rw_ios_per_sec": 0, 00:15:36.132 "rw_mbytes_per_sec": 0, 00:15:36.132 "r_mbytes_per_sec": 0, 00:15:36.132 "w_mbytes_per_sec": 0 00:15:36.132 }, 00:15:36.132 "claimed": false, 00:15:36.132 "zoned": false, 00:15:36.132 "supported_io_types": { 00:15:36.132 "read": true, 00:15:36.132 "write": true, 00:15:36.132 "unmap": true, 00:15:36.132 "write_zeroes": true, 00:15:36.132 "flush": true, 00:15:36.132 "reset": true, 00:15:36.132 "compare": false, 00:15:36.132 "compare_and_write": false, 00:15:36.132 "abort": true, 00:15:36.132 "nvme_admin": false, 00:15:36.132 "nvme_io": false 00:15:36.132 }, 00:15:36.132 "memory_domains": [ 00:15:36.132 { 00:15:36.132 "dma_device_id": "system", 00:15:36.132 "dma_device_type": 1 00:15:36.132 }, 00:15:36.132 { 00:15:36.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.132 "dma_device_type": 2 00:15:36.132 } 00:15:36.132 ], 00:15:36.132 "driver_specific": {} 00:15:36.132 } 00:15:36.132 ] 00:15:36.132 10:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:36.132 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:36.132 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:36.132 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:36.391 [2024-06-10 10:19:41.970477] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.391 [2024-06-10 10:19:41.970528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.391 [2024-06-10 10:19:41.970536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.391 [2024-06-10 10:19:41.970957] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.391 [2024-06-10 10:19:41.970974] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.391 10:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.957 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.957 "name": "Existed_Raid", 00:15:36.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.957 "strip_size_kb": 0, 00:15:36.957 "state": "configuring", 00:15:36.957 "raid_level": "raid1", 00:15:36.957 "superblock": false, 00:15:36.957 "num_base_bdevs": 4, 00:15:36.957 "num_base_bdevs_discovered": 3, 00:15:36.957 "num_base_bdevs_operational": 4, 00:15:36.957 "base_bdevs_list": [ 00:15:36.957 { 00:15:36.957 "name": "BaseBdev1", 00:15:36.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.957 "is_configured": false, 00:15:36.957 "data_offset": 0, 00:15:36.957 "data_size": 0 00:15:36.957 }, 00:15:36.957 { 00:15:36.957 "name": "BaseBdev2", 00:15:36.957 "uuid": "f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:36.957 "is_configured": true, 00:15:36.957 "data_offset": 0, 00:15:36.957 "data_size": 65536 00:15:36.957 }, 00:15:36.957 { 00:15:36.957 "name": "BaseBdev3", 00:15:36.957 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:36.957 "is_configured": true, 00:15:36.957 "data_offset": 0, 00:15:36.957 "data_size": 65536 00:15:36.957 }, 00:15:36.957 { 00:15:36.957 "name": "BaseBdev4", 00:15:36.957 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:36.957 "is_configured": true, 00:15:36.957 "data_offset": 0, 00:15:36.957 "data_size": 65536 00:15:36.957 } 00:15:36.957 ] 00:15:36.957 }' 00:15:36.957 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.957 10:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.216 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:37.474 [2024-06-10 10:19:42.874512] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.474 10:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.734 10:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.734 "name": "Existed_Raid", 00:15:37.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.734 "strip_size_kb": 0, 00:15:37.734 "state": "configuring", 00:15:37.734 "raid_level": "raid1", 00:15:37.734 "superblock": false, 00:15:37.734 "num_base_bdevs": 4, 00:15:37.734 "num_base_bdevs_discovered": 2, 00:15:37.734 "num_base_bdevs_operational": 4, 00:15:37.734 "base_bdevs_list": [ 00:15:37.734 { 00:15:37.734 "name": "BaseBdev1", 00:15:37.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.734 "is_configured": false, 00:15:37.734 "data_offset": 0, 00:15:37.734 "data_size": 0 00:15:37.734 }, 00:15:37.734 { 00:15:37.734 "name": null, 00:15:37.734 "uuid": "f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:37.734 "is_configured": false, 00:15:37.734 "data_offset": 0, 00:15:37.734 "data_size": 65536 00:15:37.734 }, 00:15:37.734 { 00:15:37.734 "name": "BaseBdev3", 00:15:37.734 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:37.734 "is_configured": true, 00:15:37.734 "data_offset": 0, 00:15:37.734 "data_size": 65536 00:15:37.734 }, 00:15:37.734 { 00:15:37.734 "name": "BaseBdev4", 00:15:37.734 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:37.734 "is_configured": true, 00:15:37.734 "data_offset": 0, 00:15:37.734 "data_size": 65536 00:15:37.734 } 00:15:37.734 ] 00:15:37.734 }' 00:15:37.734 10:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.734 10:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.993 10:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:37.993 10:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.251 10:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:38.251 10:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.510 [2024-06-10 10:19:43.974695] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.510 BaseBdev1 00:15:38.510 10:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:38.510 10:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:15:38.510 10:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:38.510 10:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 
00:15:38.510 10:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:38.510 10:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:38.510 10:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.769 10:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.028 [ 00:15:39.028 { 00:15:39.028 "name": "BaseBdev1", 00:15:39.028 "aliases": [ 00:15:39.028 "f4ae7fc6-2712-11ef-b084-113036b5c18d" 00:15:39.028 ], 00:15:39.028 "product_name": "Malloc disk", 00:15:39.028 "block_size": 512, 00:15:39.028 "num_blocks": 65536, 00:15:39.028 "uuid": "f4ae7fc6-2712-11ef-b084-113036b5c18d", 00:15:39.028 "assigned_rate_limits": { 00:15:39.028 "rw_ios_per_sec": 0, 00:15:39.028 "rw_mbytes_per_sec": 0, 00:15:39.028 "r_mbytes_per_sec": 0, 00:15:39.028 "w_mbytes_per_sec": 0 00:15:39.028 }, 00:15:39.028 "claimed": true, 00:15:39.028 "claim_type": "exclusive_write", 00:15:39.028 "zoned": false, 00:15:39.028 "supported_io_types": { 00:15:39.028 "read": true, 00:15:39.028 "write": true, 00:15:39.028 "unmap": true, 00:15:39.028 "write_zeroes": true, 00:15:39.029 "flush": true, 00:15:39.029 "reset": true, 00:15:39.029 "compare": false, 00:15:39.029 "compare_and_write": false, 00:15:39.029 "abort": true, 00:15:39.029 "nvme_admin": false, 00:15:39.029 "nvme_io": false 00:15:39.029 }, 00:15:39.029 "memory_domains": [ 00:15:39.029 { 00:15:39.029 "dma_device_id": "system", 00:15:39.029 "dma_device_type": 1 00:15:39.029 }, 00:15:39.029 { 00:15:39.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.029 "dma_device_type": 2 00:15:39.029 } 00:15:39.029 ], 00:15:39.029 "driver_specific": {} 00:15:39.029 } 00:15:39.029 ] 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.029 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:39.287 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.287 "name": "Existed_Raid", 00:15:39.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.287 "strip_size_kb": 0, 00:15:39.288 "state": "configuring", 00:15:39.288 "raid_level": "raid1", 00:15:39.288 "superblock": false, 00:15:39.288 "num_base_bdevs": 4, 00:15:39.288 "num_base_bdevs_discovered": 3, 00:15:39.288 "num_base_bdevs_operational": 4, 00:15:39.288 "base_bdevs_list": [ 00:15:39.288 { 00:15:39.288 "name": "BaseBdev1", 00:15:39.288 "uuid": "f4ae7fc6-2712-11ef-b084-113036b5c18d", 00:15:39.288 "is_configured": true, 00:15:39.288 "data_offset": 0, 00:15:39.288 "data_size": 65536 00:15:39.288 }, 00:15:39.288 { 00:15:39.288 "name": null, 00:15:39.288 "uuid": "f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:39.288 "is_configured": false, 00:15:39.288 "data_offset": 0, 00:15:39.288 "data_size": 65536 00:15:39.288 }, 00:15:39.288 { 00:15:39.288 "name": "BaseBdev3", 00:15:39.288 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:39.288 "is_configured": true, 00:15:39.288 "data_offset": 0, 00:15:39.288 "data_size": 65536 00:15:39.288 }, 00:15:39.288 { 00:15:39.288 "name": "BaseBdev4", 00:15:39.288 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:39.288 "is_configured": true, 00:15:39.288 "data_offset": 0, 00:15:39.288 "data_size": 65536 00:15:39.288 } 00:15:39.288 ] 00:15:39.288 }' 00:15:39.288 10:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.288 10:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.547 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.547 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:39.806 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:39.806 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:40.066 [2024-06-10 10:19:45.554660] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.066 
10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.066 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.337 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.337 "name": "Existed_Raid", 00:15:40.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.337 "strip_size_kb": 0, 00:15:40.337 "state": "configuring", 00:15:40.337 "raid_level": "raid1", 00:15:40.337 "superblock": false, 00:15:40.337 "num_base_bdevs": 4, 00:15:40.337 "num_base_bdevs_discovered": 2, 00:15:40.337 "num_base_bdevs_operational": 4, 00:15:40.337 "base_bdevs_list": [ 00:15:40.337 { 00:15:40.337 "name": "BaseBdev1", 00:15:40.337 "uuid": "f4ae7fc6-2712-11ef-b084-113036b5c18d", 00:15:40.337 "is_configured": true, 00:15:40.337 "data_offset": 0, 00:15:40.337 "data_size": 65536 00:15:40.337 }, 00:15:40.337 { 00:15:40.337 "name": null, 00:15:40.337 "uuid": "f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:40.337 "is_configured": false, 00:15:40.337 "data_offset": 0, 00:15:40.337 "data_size": 65536 00:15:40.337 }, 00:15:40.337 { 00:15:40.337 "name": null, 00:15:40.337 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:40.337 "is_configured": false, 00:15:40.337 "data_offset": 0, 00:15:40.337 "data_size": 65536 00:15:40.337 }, 00:15:40.337 { 00:15:40.337 "name": "BaseBdev4", 00:15:40.337 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:40.337 "is_configured": true, 00:15:40.337 "data_offset": 0, 00:15:40.337 "data_size": 65536 00:15:40.337 } 00:15:40.337 ] 00:15:40.337 }' 00:15:40.337 10:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.337 10:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.612 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.612 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.871 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:40.871 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:41.131 [2024-06-10 10:19:46.622729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.131 10:19:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.131 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.389 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.389 "name": "Existed_Raid", 00:15:41.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.389 "strip_size_kb": 0, 00:15:41.389 "state": "configuring", 00:15:41.389 "raid_level": "raid1", 00:15:41.389 "superblock": false, 00:15:41.389 "num_base_bdevs": 4, 00:15:41.389 "num_base_bdevs_discovered": 3, 00:15:41.389 "num_base_bdevs_operational": 4, 00:15:41.389 "base_bdevs_list": [ 00:15:41.389 { 00:15:41.389 "name": "BaseBdev1", 00:15:41.389 "uuid": "f4ae7fc6-2712-11ef-b084-113036b5c18d", 00:15:41.389 "is_configured": true, 00:15:41.389 "data_offset": 0, 00:15:41.389 "data_size": 65536 00:15:41.389 }, 00:15:41.389 { 00:15:41.389 "name": null, 00:15:41.389 "uuid": "f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:41.389 "is_configured": false, 00:15:41.389 "data_offset": 0, 00:15:41.389 "data_size": 65536 00:15:41.389 }, 00:15:41.389 { 00:15:41.389 "name": "BaseBdev3", 00:15:41.389 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:41.389 "is_configured": true, 00:15:41.389 "data_offset": 0, 00:15:41.389 "data_size": 65536 00:15:41.389 }, 00:15:41.389 { 00:15:41.389 "name": "BaseBdev4", 00:15:41.389 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:41.389 "is_configured": true, 00:15:41.389 "data_offset": 0, 00:15:41.389 "data_size": 65536 00:15:41.389 } 00:15:41.389 ] 00:15:41.389 }' 00:15:41.389 10:19:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.389 10:19:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.651 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.651 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:41.909 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:41.909 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:42.187 [2024-06-10 10:19:47.666782] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=0 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.187 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.447 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:42.447 "name": "Existed_Raid", 00:15:42.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.447 "strip_size_kb": 0, 00:15:42.447 "state": "configuring", 00:15:42.447 "raid_level": "raid1", 00:15:42.447 "superblock": false, 00:15:42.447 "num_base_bdevs": 4, 00:15:42.447 "num_base_bdevs_discovered": 2, 00:15:42.447 "num_base_bdevs_operational": 4, 00:15:42.447 "base_bdevs_list": [ 00:15:42.447 { 00:15:42.447 "name": null, 00:15:42.447 "uuid": "f4ae7fc6-2712-11ef-b084-113036b5c18d", 00:15:42.447 "is_configured": false, 00:15:42.447 "data_offset": 0, 00:15:42.447 "data_size": 65536 00:15:42.447 }, 00:15:42.447 { 00:15:42.447 "name": null, 00:15:42.447 "uuid": "f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:42.447 "is_configured": false, 00:15:42.447 "data_offset": 0, 00:15:42.447 "data_size": 65536 00:15:42.447 }, 00:15:42.447 { 00:15:42.447 "name": "BaseBdev3", 00:15:42.447 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:42.447 "is_configured": true, 00:15:42.447 "data_offset": 0, 00:15:42.447 "data_size": 65536 00:15:42.447 }, 00:15:42.447 { 00:15:42.447 "name": "BaseBdev4", 00:15:42.447 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:42.447 "is_configured": true, 00:15:42.447 "data_offset": 0, 00:15:42.447 "data_size": 65536 00:15:42.447 } 00:15:42.447 ] 00:15:42.447 }' 00:15:42.447 10:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.447 10:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.706 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.706 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:42.965 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:42.965 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:43.225 [2024-06-10 10:19:48.699545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.225 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.508 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.508 "name": "Existed_Raid", 00:15:43.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.508 "strip_size_kb": 0, 00:15:43.508 "state": "configuring", 00:15:43.508 "raid_level": "raid1", 00:15:43.508 "superblock": false, 00:15:43.508 "num_base_bdevs": 4, 00:15:43.508 "num_base_bdevs_discovered": 3, 00:15:43.508 "num_base_bdevs_operational": 4, 00:15:43.508 "base_bdevs_list": [ 00:15:43.508 { 00:15:43.508 "name": null, 00:15:43.508 "uuid": "f4ae7fc6-2712-11ef-b084-113036b5c18d", 00:15:43.508 "is_configured": false, 00:15:43.508 "data_offset": 0, 00:15:43.508 "data_size": 65536 00:15:43.508 }, 00:15:43.508 { 00:15:43.508 "name": "BaseBdev2", 00:15:43.508 "uuid": "f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:43.508 "is_configured": true, 00:15:43.508 "data_offset": 0, 00:15:43.508 "data_size": 65536 00:15:43.508 }, 00:15:43.508 { 00:15:43.508 "name": "BaseBdev3", 00:15:43.508 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:43.508 "is_configured": true, 00:15:43.508 "data_offset": 0, 00:15:43.508 "data_size": 65536 00:15:43.508 }, 00:15:43.508 { 00:15:43.508 "name": "BaseBdev4", 00:15:43.508 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:43.508 "is_configured": true, 00:15:43.508 "data_offset": 0, 00:15:43.508 "data_size": 65536 00:15:43.508 } 00:15:43.508 ] 00:15:43.508 }' 00:15:43.508 10:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.508 10:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.767 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.767 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:44.035 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:44.035 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:44.035 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:44.311 10:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f4ae7fc6-2712-11ef-b084-113036b5c18d 00:15:44.569 [2024-06-10 10:19:49.987682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:44.569 [2024-06-10 10:19:49.987707] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a991f00 00:15:44.569 [2024-06-10 10:19:49.987711] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:44.569 [2024-06-10 10:19:49.987732] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a9f4e20 00:15:44.569 [2024-06-10 10:19:49.987787] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a991f00 00:15:44.569 [2024-06-10 10:19:49.987806] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a991f00 00:15:44.569 [2024-06-10 10:19:49.987835] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.569 NewBaseBdev 00:15:44.569 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:44.569 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:15:44.569 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:44.569 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local i 00:15:44.569 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:44.569 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:44.569 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.828 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:45.086 [ 00:15:45.086 { 00:15:45.086 "name": "NewBaseBdev", 00:15:45.086 "aliases": [ 00:15:45.086 "f4ae7fc6-2712-11ef-b084-113036b5c18d" 00:15:45.086 ], 00:15:45.086 "product_name": "Malloc disk", 00:15:45.086 "block_size": 512, 00:15:45.086 "num_blocks": 65536, 00:15:45.086 "uuid": "f4ae7fc6-2712-11ef-b084-113036b5c18d", 00:15:45.086 "assigned_rate_limits": { 00:15:45.086 "rw_ios_per_sec": 0, 00:15:45.086 "rw_mbytes_per_sec": 0, 00:15:45.086 "r_mbytes_per_sec": 0, 00:15:45.086 "w_mbytes_per_sec": 0 00:15:45.086 }, 00:15:45.086 "claimed": true, 00:15:45.086 "claim_type": "exclusive_write", 00:15:45.086 "zoned": false, 00:15:45.086 "supported_io_types": { 00:15:45.086 "read": true, 00:15:45.086 "write": true, 00:15:45.086 "unmap": true, 00:15:45.086 "write_zeroes": true, 00:15:45.086 "flush": true, 00:15:45.086 "reset": true, 00:15:45.086 "compare": false, 00:15:45.086 "compare_and_write": false, 00:15:45.086 "abort": true, 00:15:45.086 "nvme_admin": false, 00:15:45.086 "nvme_io": false 00:15:45.086 }, 00:15:45.086 "memory_domains": [ 00:15:45.086 { 00:15:45.086 "dma_device_id": "system", 00:15:45.086 "dma_device_type": 1 00:15:45.086 }, 00:15:45.086 { 00:15:45.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.086 "dma_device_type": 2 00:15:45.086 } 00:15:45.086 ], 
00:15:45.086 "driver_specific": {} 00:15:45.086 } 00:15:45.086 ] 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # return 0 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.086 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.345 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.345 "name": "Existed_Raid", 00:15:45.345 "uuid": "f8440733-2712-11ef-b084-113036b5c18d", 00:15:45.345 "strip_size_kb": 0, 00:15:45.345 "state": "online", 00:15:45.345 "raid_level": "raid1", 00:15:45.345 "superblock": false, 00:15:45.345 "num_base_bdevs": 4, 00:15:45.345 "num_base_bdevs_discovered": 4, 00:15:45.345 "num_base_bdevs_operational": 4, 00:15:45.345 "base_bdevs_list": [ 00:15:45.345 { 00:15:45.345 "name": "NewBaseBdev", 00:15:45.345 "uuid": "f4ae7fc6-2712-11ef-b084-113036b5c18d", 00:15:45.345 "is_configured": true, 00:15:45.345 "data_offset": 0, 00:15:45.345 "data_size": 65536 00:15:45.345 }, 00:15:45.345 { 00:15:45.345 "name": "BaseBdev2", 00:15:45.345 "uuid": "f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:45.345 "is_configured": true, 00:15:45.345 "data_offset": 0, 00:15:45.345 "data_size": 65536 00:15:45.345 }, 00:15:45.345 { 00:15:45.345 "name": "BaseBdev3", 00:15:45.345 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:45.345 "is_configured": true, 00:15:45.345 "data_offset": 0, 00:15:45.345 "data_size": 65536 00:15:45.345 }, 00:15:45.345 { 00:15:45.345 "name": "BaseBdev4", 00:15:45.345 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:45.345 "is_configured": true, 00:15:45.345 "data_offset": 0, 00:15:45.345 "data_size": 65536 00:15:45.345 } 00:15:45.345 ] 00:15:45.345 }' 00:15:45.345 10:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.345 10:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.603 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.603 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:45.603 10:19:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:45.603 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:45.603 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:45.603 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:45.603 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:45.603 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:45.862 [2024-06-10 10:19:51.291688] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.862 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:45.862 "name": "Existed_Raid", 00:15:45.862 "aliases": [ 00:15:45.862 "f8440733-2712-11ef-b084-113036b5c18d" 00:15:45.862 ], 00:15:45.862 "product_name": "Raid Volume", 00:15:45.862 "block_size": 512, 00:15:45.862 "num_blocks": 65536, 00:15:45.862 "uuid": "f8440733-2712-11ef-b084-113036b5c18d", 00:15:45.862 "assigned_rate_limits": { 00:15:45.862 "rw_ios_per_sec": 0, 00:15:45.862 "rw_mbytes_per_sec": 0, 00:15:45.862 "r_mbytes_per_sec": 0, 00:15:45.862 "w_mbytes_per_sec": 0 00:15:45.862 }, 00:15:45.862 "claimed": false, 00:15:45.862 "zoned": false, 00:15:45.862 "supported_io_types": { 00:15:45.862 "read": true, 00:15:45.862 "write": true, 00:15:45.862 "unmap": false, 00:15:45.862 "write_zeroes": true, 00:15:45.862 "flush": false, 00:15:45.862 "reset": true, 00:15:45.862 "compare": false, 00:15:45.862 "compare_and_write": false, 00:15:45.862 "abort": false, 00:15:45.862 "nvme_admin": false, 00:15:45.862 "nvme_io": false 00:15:45.862 }, 00:15:45.862 "memory_domains": [ 00:15:45.862 { 00:15:45.862 "dma_device_id": "system", 00:15:45.862 "dma_device_type": 1 00:15:45.862 }, 00:15:45.862 { 00:15:45.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.862 "dma_device_type": 2 00:15:45.862 }, 00:15:45.862 { 00:15:45.862 "dma_device_id": "system", 00:15:45.862 "dma_device_type": 1 00:15:45.862 }, 00:15:45.862 { 00:15:45.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.862 "dma_device_type": 2 00:15:45.862 }, 00:15:45.862 { 00:15:45.862 "dma_device_id": "system", 00:15:45.862 "dma_device_type": 1 00:15:45.862 }, 00:15:45.862 { 00:15:45.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.862 "dma_device_type": 2 00:15:45.862 }, 00:15:45.862 { 00:15:45.862 "dma_device_id": "system", 00:15:45.862 "dma_device_type": 1 00:15:45.862 }, 00:15:45.862 { 00:15:45.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.862 "dma_device_type": 2 00:15:45.862 } 00:15:45.862 ], 00:15:45.862 "driver_specific": { 00:15:45.862 "raid": { 00:15:45.862 "uuid": "f8440733-2712-11ef-b084-113036b5c18d", 00:15:45.862 "strip_size_kb": 0, 00:15:45.862 "state": "online", 00:15:45.862 "raid_level": "raid1", 00:15:45.862 "superblock": false, 00:15:45.862 "num_base_bdevs": 4, 00:15:45.862 "num_base_bdevs_discovered": 4, 00:15:45.863 "num_base_bdevs_operational": 4, 00:15:45.863 "base_bdevs_list": [ 00:15:45.863 { 00:15:45.863 "name": "NewBaseBdev", 00:15:45.863 "uuid": "f4ae7fc6-2712-11ef-b084-113036b5c18d", 00:15:45.863 "is_configured": true, 00:15:45.863 "data_offset": 0, 00:15:45.863 "data_size": 65536 00:15:45.863 }, 00:15:45.863 { 00:15:45.863 "name": "BaseBdev2", 00:15:45.863 "uuid": 
"f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:45.863 "is_configured": true, 00:15:45.863 "data_offset": 0, 00:15:45.863 "data_size": 65536 00:15:45.863 }, 00:15:45.863 { 00:15:45.863 "name": "BaseBdev3", 00:15:45.863 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:45.863 "is_configured": true, 00:15:45.863 "data_offset": 0, 00:15:45.863 "data_size": 65536 00:15:45.863 }, 00:15:45.863 { 00:15:45.863 "name": "BaseBdev4", 00:15:45.863 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:45.863 "is_configured": true, 00:15:45.863 "data_offset": 0, 00:15:45.863 "data_size": 65536 00:15:45.863 } 00:15:45.863 ] 00:15:45.863 } 00:15:45.863 } 00:15:45.863 }' 00:15:45.863 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.863 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:45.863 BaseBdev2 00:15:45.863 BaseBdev3 00:15:45.863 BaseBdev4' 00:15:45.863 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:45.863 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:45.863 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:46.121 "name": "NewBaseBdev", 00:15:46.121 "aliases": [ 00:15:46.121 "f4ae7fc6-2712-11ef-b084-113036b5c18d" 00:15:46.121 ], 00:15:46.121 "product_name": "Malloc disk", 00:15:46.121 "block_size": 512, 00:15:46.121 "num_blocks": 65536, 00:15:46.121 "uuid": "f4ae7fc6-2712-11ef-b084-113036b5c18d", 00:15:46.121 "assigned_rate_limits": { 00:15:46.121 "rw_ios_per_sec": 0, 00:15:46.121 "rw_mbytes_per_sec": 0, 00:15:46.121 "r_mbytes_per_sec": 0, 00:15:46.121 "w_mbytes_per_sec": 0 00:15:46.121 }, 00:15:46.121 "claimed": true, 00:15:46.121 "claim_type": "exclusive_write", 00:15:46.121 "zoned": false, 00:15:46.121 "supported_io_types": { 00:15:46.121 "read": true, 00:15:46.121 "write": true, 00:15:46.121 "unmap": true, 00:15:46.121 "write_zeroes": true, 00:15:46.121 "flush": true, 00:15:46.121 "reset": true, 00:15:46.121 "compare": false, 00:15:46.121 "compare_and_write": false, 00:15:46.121 "abort": true, 00:15:46.121 "nvme_admin": false, 00:15:46.121 "nvme_io": false 00:15:46.121 }, 00:15:46.121 "memory_domains": [ 00:15:46.121 { 00:15:46.121 "dma_device_id": "system", 00:15:46.121 "dma_device_type": 1 00:15:46.121 }, 00:15:46.121 { 00:15:46.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.121 "dma_device_type": 2 00:15:46.121 } 00:15:46.121 ], 00:15:46.121 "driver_specific": {} 00:15:46.121 }' 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:46.121 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:46.380 "name": "BaseBdev2", 00:15:46.380 "aliases": [ 00:15:46.380 "f1e592e1-2712-11ef-b084-113036b5c18d" 00:15:46.380 ], 00:15:46.380 "product_name": "Malloc disk", 00:15:46.380 "block_size": 512, 00:15:46.380 "num_blocks": 65536, 00:15:46.380 "uuid": "f1e592e1-2712-11ef-b084-113036b5c18d", 00:15:46.380 "assigned_rate_limits": { 00:15:46.380 "rw_ios_per_sec": 0, 00:15:46.380 "rw_mbytes_per_sec": 0, 00:15:46.380 "r_mbytes_per_sec": 0, 00:15:46.380 "w_mbytes_per_sec": 0 00:15:46.380 }, 00:15:46.380 "claimed": true, 00:15:46.380 "claim_type": "exclusive_write", 00:15:46.380 "zoned": false, 00:15:46.380 "supported_io_types": { 00:15:46.380 "read": true, 00:15:46.380 "write": true, 00:15:46.380 "unmap": true, 00:15:46.380 "write_zeroes": true, 00:15:46.380 "flush": true, 00:15:46.380 "reset": true, 00:15:46.380 "compare": false, 00:15:46.380 "compare_and_write": false, 00:15:46.380 "abort": true, 00:15:46.380 "nvme_admin": false, 00:15:46.380 "nvme_io": false 00:15:46.380 }, 00:15:46.380 "memory_domains": [ 00:15:46.380 { 00:15:46.380 "dma_device_id": "system", 00:15:46.380 "dma_device_type": 1 00:15:46.380 }, 00:15:46.380 { 00:15:46.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.380 "dma_device_type": 2 00:15:46.380 } 00:15:46.380 ], 00:15:46.380 "driver_specific": {} 00:15:46.380 }' 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:46.380 10:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:46.641 "name": "BaseBdev3", 00:15:46.641 "aliases": [ 00:15:46.641 "f26aa2e8-2712-11ef-b084-113036b5c18d" 00:15:46.641 ], 00:15:46.641 "product_name": "Malloc disk", 00:15:46.641 "block_size": 512, 00:15:46.641 "num_blocks": 65536, 00:15:46.641 "uuid": "f26aa2e8-2712-11ef-b084-113036b5c18d", 00:15:46.641 "assigned_rate_limits": { 00:15:46.641 "rw_ios_per_sec": 0, 00:15:46.641 "rw_mbytes_per_sec": 0, 00:15:46.641 "r_mbytes_per_sec": 0, 00:15:46.641 "w_mbytes_per_sec": 0 00:15:46.641 }, 00:15:46.641 "claimed": true, 00:15:46.641 "claim_type": "exclusive_write", 00:15:46.641 "zoned": false, 00:15:46.641 "supported_io_types": { 00:15:46.641 "read": true, 00:15:46.641 "write": true, 00:15:46.641 "unmap": true, 00:15:46.641 "write_zeroes": true, 00:15:46.641 "flush": true, 00:15:46.641 "reset": true, 00:15:46.641 "compare": false, 00:15:46.641 "compare_and_write": false, 00:15:46.641 "abort": true, 00:15:46.641 "nvme_admin": false, 00:15:46.641 "nvme_io": false 00:15:46.641 }, 00:15:46.641 "memory_domains": [ 00:15:46.641 { 00:15:46.641 "dma_device_id": "system", 00:15:46.641 "dma_device_type": 1 00:15:46.641 }, 00:15:46.641 { 00:15:46.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.641 "dma_device_type": 2 00:15:46.641 } 00:15:46.641 ], 00:15:46.641 "driver_specific": {} 00:15:46.641 }' 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.641 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.900 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.900 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:46.900 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:46.900 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:47.159 10:19:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:47.159 "name": "BaseBdev4", 00:15:47.159 "aliases": [ 00:15:47.159 "f2faae95-2712-11ef-b084-113036b5c18d" 00:15:47.159 ], 00:15:47.159 "product_name": "Malloc disk", 00:15:47.159 "block_size": 512, 00:15:47.159 "num_blocks": 65536, 00:15:47.159 "uuid": "f2faae95-2712-11ef-b084-113036b5c18d", 00:15:47.159 "assigned_rate_limits": { 00:15:47.159 "rw_ios_per_sec": 0, 00:15:47.159 "rw_mbytes_per_sec": 0, 00:15:47.159 "r_mbytes_per_sec": 0, 00:15:47.159 "w_mbytes_per_sec": 0 00:15:47.159 }, 00:15:47.159 "claimed": true, 00:15:47.159 "claim_type": "exclusive_write", 00:15:47.159 "zoned": false, 00:15:47.159 "supported_io_types": { 00:15:47.159 "read": true, 00:15:47.159 "write": true, 00:15:47.159 "unmap": true, 00:15:47.159 "write_zeroes": true, 00:15:47.159 "flush": true, 00:15:47.159 "reset": true, 00:15:47.159 "compare": false, 00:15:47.159 "compare_and_write": false, 00:15:47.159 "abort": true, 00:15:47.159 "nvme_admin": false, 00:15:47.159 "nvme_io": false 00:15:47.159 }, 00:15:47.159 "memory_domains": [ 00:15:47.159 { 00:15:47.159 "dma_device_id": "system", 00:15:47.159 "dma_device_type": 1 00:15:47.159 }, 00:15:47.159 { 00:15:47.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.159 "dma_device_type": 2 00:15:47.159 } 00:15:47.159 ], 00:15:47.159 "driver_specific": {} 00:15:47.159 }' 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:47.159 10:19:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:47.418 [2024-06-10 10:19:52.911704] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.418 [2024-06-10 10:19:52.911733] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.418 [2024-06-10 10:19:52.911753] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.418 [2024-06-10 10:19:52.911822] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.418 [2024-06-10 10:19:52.911827] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a991f00 name Existed_Raid, state offline 00:15:47.418 10:19:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 63793 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@949 -- # '[' -z 63793 ']' 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # kill -0 63793 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # uname 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # ps -c -o command 63793 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # tail -1 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:15:47.418 killing process with pid 63793 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 63793' 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # kill 63793 00:15:47.418 [2024-06-10 10:19:52.940001] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.418 10:19:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # wait 63793 00:15:47.418 [2024-06-10 10:19:52.959065] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.677 10:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:47.677 00:15:47.677 real 0m26.571s 00:15:47.677 user 0m48.450s 00:15:47.677 sys 0m3.915s 00:15:47.677 ************************************ 00:15:47.677 END TEST raid_state_function_test 00:15:47.677 ************************************ 00:15:47.677 10:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:47.677 10:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.677 10:19:53 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:47.677 10:19:53 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:15:47.677 10:19:53 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:47.678 10:19:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.678 ************************************ 00:15:47.678 START TEST raid_state_function_test_sb 00:15:47.678 ************************************ 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 4 true 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- 
# echo BaseBdev1 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=64608 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 64608' 00:15:47.678 Process raid pid: 64608 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 64608 /var/tmp/spdk-raid.sock 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@830 -- # '[' -z 64608 ']' 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:47.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:47.678 10:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.678 [2024-06-10 10:19:53.186439] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:15:47.678 [2024-06-10 10:19:53.186658] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:15:48.043 EAL: TSC is not safe to use in SMP mode 00:15:48.043 EAL: TSC is not invariant 00:15:48.302 [2024-06-10 10:19:53.647826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.302 [2024-06-10 10:19:53.728563] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:48.302 [2024-06-10 10:19:53.730705] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.302 [2024-06-10 10:19:53.731454] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.302 [2024-06-10 10:19:53.731467] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.869 10:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:48.869 10:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@863 -- # return 0 00:15:48.869 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:49.127 [2024-06-10 10:19:54.514110] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.127 [2024-06-10 10:19:54.514177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.127 [2024-06-10 10:19:54.514182] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.127 [2024-06-10 10:19:54.514191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.127 [2024-06-10 10:19:54.514194] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.127 [2024-06-10 10:19:54.514202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.127 [2024-06-10 10:19:54.514205] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:49.127 [2024-06-10 10:19:54.514212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.127 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.386 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.386 "name": "Existed_Raid", 00:15:49.386 "uuid": "faf6b32a-2712-11ef-b084-113036b5c18d", 00:15:49.386 "strip_size_kb": 0, 00:15:49.386 "state": "configuring", 00:15:49.386 "raid_level": "raid1", 00:15:49.386 "superblock": true, 00:15:49.386 "num_base_bdevs": 4, 00:15:49.386 "num_base_bdevs_discovered": 0, 00:15:49.386 "num_base_bdevs_operational": 4, 00:15:49.386 "base_bdevs_list": [ 00:15:49.386 { 00:15:49.387 "name": "BaseBdev1", 00:15:49.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.387 "is_configured": false, 00:15:49.387 "data_offset": 0, 00:15:49.387 "data_size": 0 00:15:49.387 }, 00:15:49.387 { 00:15:49.387 "name": "BaseBdev2", 00:15:49.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.387 "is_configured": false, 00:15:49.387 "data_offset": 0, 00:15:49.387 "data_size": 0 00:15:49.387 }, 00:15:49.387 { 00:15:49.387 "name": "BaseBdev3", 00:15:49.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.387 "is_configured": false, 00:15:49.387 "data_offset": 0, 00:15:49.387 "data_size": 0 00:15:49.387 }, 00:15:49.387 { 00:15:49.387 "name": "BaseBdev4", 00:15:49.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.387 "is_configured": false, 00:15:49.387 "data_offset": 0, 00:15:49.387 "data_size": 0 00:15:49.387 } 00:15:49.387 ] 00:15:49.387 }' 00:15:49.387 10:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.387 10:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.646 10:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:49.937 [2024-06-10 10:19:55.470114] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:49.937 [2024-06-10 10:19:55.470140] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e04b500 name Existed_Raid, state configuring 00:15:49.937 10:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:50.196 [2024-06-10 10:19:55.690126] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.196 [2024-06-10 10:19:55.690174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.196 [2024-06-10 10:19:55.690178] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:15:50.196 [2024-06-10 10:19:55.690186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.196 [2024-06-10 10:19:55.690189] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.196 [2024-06-10 10:19:55.690196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.196 [2024-06-10 10:19:55.690199] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:50.196 [2024-06-10 10:19:55.690205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:50.196 10:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:50.454 [2024-06-10 10:19:55.975026] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.454 BaseBdev1 00:15:50.454 10:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:50.454 10:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:15:50.454 10:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:50.454 10:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:50.454 10:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:50.454 10:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:50.454 10:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:50.712 10:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.971 [ 00:15:50.971 { 00:15:50.971 "name": "BaseBdev1", 00:15:50.971 "aliases": [ 00:15:50.971 "fbd57c15-2712-11ef-b084-113036b5c18d" 00:15:50.971 ], 00:15:50.971 "product_name": "Malloc disk", 00:15:50.971 "block_size": 512, 00:15:50.971 "num_blocks": 65536, 00:15:50.971 "uuid": "fbd57c15-2712-11ef-b084-113036b5c18d", 00:15:50.971 "assigned_rate_limits": { 00:15:50.971 "rw_ios_per_sec": 0, 00:15:50.971 "rw_mbytes_per_sec": 0, 00:15:50.971 "r_mbytes_per_sec": 0, 00:15:50.971 "w_mbytes_per_sec": 0 00:15:50.971 }, 00:15:50.971 "claimed": true, 00:15:50.971 "claim_type": "exclusive_write", 00:15:50.971 "zoned": false, 00:15:50.971 "supported_io_types": { 00:15:50.971 "read": true, 00:15:50.971 "write": true, 00:15:50.971 "unmap": true, 00:15:50.971 "write_zeroes": true, 00:15:50.971 "flush": true, 00:15:50.971 "reset": true, 00:15:50.971 "compare": false, 00:15:50.971 "compare_and_write": false, 00:15:50.971 "abort": true, 00:15:50.971 "nvme_admin": false, 00:15:50.971 "nvme_io": false 00:15:50.971 }, 00:15:50.971 "memory_domains": [ 00:15:50.971 { 00:15:50.971 "dma_device_id": "system", 00:15:50.971 "dma_device_type": 1 00:15:50.971 }, 00:15:50.971 { 00:15:50.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.971 "dma_device_type": 2 00:15:50.971 } 00:15:50.971 ], 00:15:50.971 "driver_specific": {} 00:15:50.971 } 00:15:50.971 ] 00:15:50.971 10:19:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # return 0 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.972 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.230 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.230 "name": "Existed_Raid", 00:15:51.230 "uuid": "fbaa255c-2712-11ef-b084-113036b5c18d", 00:15:51.230 "strip_size_kb": 0, 00:15:51.230 "state": "configuring", 00:15:51.230 "raid_level": "raid1", 00:15:51.230 "superblock": true, 00:15:51.230 "num_base_bdevs": 4, 00:15:51.230 "num_base_bdevs_discovered": 1, 00:15:51.230 "num_base_bdevs_operational": 4, 00:15:51.230 "base_bdevs_list": [ 00:15:51.230 { 00:15:51.230 "name": "BaseBdev1", 00:15:51.230 "uuid": "fbd57c15-2712-11ef-b084-113036b5c18d", 00:15:51.230 "is_configured": true, 00:15:51.230 "data_offset": 2048, 00:15:51.230 "data_size": 63488 00:15:51.230 }, 00:15:51.230 { 00:15:51.230 "name": "BaseBdev2", 00:15:51.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.230 "is_configured": false, 00:15:51.230 "data_offset": 0, 00:15:51.230 "data_size": 0 00:15:51.230 }, 00:15:51.230 { 00:15:51.230 "name": "BaseBdev3", 00:15:51.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.230 "is_configured": false, 00:15:51.230 "data_offset": 0, 00:15:51.230 "data_size": 0 00:15:51.230 }, 00:15:51.230 { 00:15:51.230 "name": "BaseBdev4", 00:15:51.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.230 "is_configured": false, 00:15:51.230 "data_offset": 0, 00:15:51.230 "data_size": 0 00:15:51.230 } 00:15:51.230 ] 00:15:51.230 }' 00:15:51.230 10:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.230 10:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.796 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:52.054 [2024-06-10 10:19:57.446208] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.054 [2024-06-10 10:19:57.446250] 
bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e04b500 name Existed_Raid, state configuring 00:15:52.054 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:52.313 [2024-06-10 10:19:57.678228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.313 [2024-06-10 10:19:57.678933] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.313 [2024-06-10 10:19:57.678973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.313 [2024-06-10 10:19:57.678977] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:52.313 [2024-06-10 10:19:57.678985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:52.313 [2024-06-10 10:19:57.678989] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:52.313 [2024-06-10 10:19:57.678995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.313 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.572 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.572 "name": "Existed_Raid", 00:15:52.572 "uuid": "fcd98180-2712-11ef-b084-113036b5c18d", 00:15:52.572 "strip_size_kb": 0, 00:15:52.572 "state": "configuring", 00:15:52.572 "raid_level": "raid1", 00:15:52.572 "superblock": true, 00:15:52.572 "num_base_bdevs": 4, 00:15:52.572 "num_base_bdevs_discovered": 1, 00:15:52.572 "num_base_bdevs_operational": 4, 00:15:52.572 "base_bdevs_list": [ 00:15:52.572 { 00:15:52.572 "name": 
"BaseBdev1", 00:15:52.572 "uuid": "fbd57c15-2712-11ef-b084-113036b5c18d", 00:15:52.572 "is_configured": true, 00:15:52.572 "data_offset": 2048, 00:15:52.572 "data_size": 63488 00:15:52.572 }, 00:15:52.572 { 00:15:52.572 "name": "BaseBdev2", 00:15:52.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.572 "is_configured": false, 00:15:52.572 "data_offset": 0, 00:15:52.572 "data_size": 0 00:15:52.572 }, 00:15:52.572 { 00:15:52.572 "name": "BaseBdev3", 00:15:52.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.572 "is_configured": false, 00:15:52.572 "data_offset": 0, 00:15:52.572 "data_size": 0 00:15:52.572 }, 00:15:52.572 { 00:15:52.572 "name": "BaseBdev4", 00:15:52.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.572 "is_configured": false, 00:15:52.572 "data_offset": 0, 00:15:52.572 "data_size": 0 00:15:52.572 } 00:15:52.572 ] 00:15:52.572 }' 00:15:52.573 10:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.573 10:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.833 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:53.091 [2024-06-10 10:19:58.474393] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.091 BaseBdev2 00:15:53.091 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:53.091 10:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:15:53.091 10:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:53.091 10:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:53.091 10:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:53.091 10:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:53.091 10:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.350 10:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.350 [ 00:15:53.350 { 00:15:53.350 "name": "BaseBdev2", 00:15:53.350 "aliases": [ 00:15:53.351 "fd52fa05-2712-11ef-b084-113036b5c18d" 00:15:53.351 ], 00:15:53.351 "product_name": "Malloc disk", 00:15:53.351 "block_size": 512, 00:15:53.351 "num_blocks": 65536, 00:15:53.351 "uuid": "fd52fa05-2712-11ef-b084-113036b5c18d", 00:15:53.351 "assigned_rate_limits": { 00:15:53.351 "rw_ios_per_sec": 0, 00:15:53.351 "rw_mbytes_per_sec": 0, 00:15:53.351 "r_mbytes_per_sec": 0, 00:15:53.351 "w_mbytes_per_sec": 0 00:15:53.351 }, 00:15:53.351 "claimed": true, 00:15:53.351 "claim_type": "exclusive_write", 00:15:53.351 "zoned": false, 00:15:53.351 "supported_io_types": { 00:15:53.351 "read": true, 00:15:53.351 "write": true, 00:15:53.351 "unmap": true, 00:15:53.351 "write_zeroes": true, 00:15:53.351 "flush": true, 00:15:53.351 "reset": true, 00:15:53.351 "compare": false, 00:15:53.351 "compare_and_write": false, 00:15:53.351 "abort": true, 00:15:53.351 "nvme_admin": false, 00:15:53.351 "nvme_io": false 
00:15:53.351 }, 00:15:53.351 "memory_domains": [ 00:15:53.351 { 00:15:53.351 "dma_device_id": "system", 00:15:53.351 "dma_device_type": 1 00:15:53.351 }, 00:15:53.351 { 00:15:53.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.351 "dma_device_type": 2 00:15:53.351 } 00:15:53.351 ], 00:15:53.351 "driver_specific": {} 00:15:53.351 } 00:15:53.351 ] 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.351 10:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.610 10:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:53.610 "name": "Existed_Raid", 00:15:53.610 "uuid": "fcd98180-2712-11ef-b084-113036b5c18d", 00:15:53.610 "strip_size_kb": 0, 00:15:53.610 "state": "configuring", 00:15:53.610 "raid_level": "raid1", 00:15:53.610 "superblock": true, 00:15:53.610 "num_base_bdevs": 4, 00:15:53.610 "num_base_bdevs_discovered": 2, 00:15:53.610 "num_base_bdevs_operational": 4, 00:15:53.610 "base_bdevs_list": [ 00:15:53.610 { 00:15:53.610 "name": "BaseBdev1", 00:15:53.610 "uuid": "fbd57c15-2712-11ef-b084-113036b5c18d", 00:15:53.610 "is_configured": true, 00:15:53.610 "data_offset": 2048, 00:15:53.610 "data_size": 63488 00:15:53.610 }, 00:15:53.610 { 00:15:53.610 "name": "BaseBdev2", 00:15:53.610 "uuid": "fd52fa05-2712-11ef-b084-113036b5c18d", 00:15:53.610 "is_configured": true, 00:15:53.610 "data_offset": 2048, 00:15:53.610 "data_size": 63488 00:15:53.610 }, 00:15:53.610 { 00:15:53.610 "name": "BaseBdev3", 00:15:53.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.610 "is_configured": false, 00:15:53.610 "data_offset": 0, 00:15:53.610 "data_size": 0 00:15:53.610 }, 00:15:53.610 { 00:15:53.610 "name": "BaseBdev4", 00:15:53.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.610 "is_configured": false, 00:15:53.610 "data_offset": 0, 
00:15:53.610 "data_size": 0 00:15:53.610 } 00:15:53.610 ] 00:15:53.610 }' 00:15:53.610 10:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:53.610 10:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.209 10:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:54.209 [2024-06-10 10:19:59.738505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.209 BaseBdev3 00:15:54.209 10:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:54.209 10:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:15:54.209 10:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:54.209 10:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:54.209 10:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:54.209 10:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:54.209 10:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:54.542 10:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:54.803 [ 00:15:54.803 { 00:15:54.803 "name": "BaseBdev3", 00:15:54.803 "aliases": [ 00:15:54.803 "fe13dd70-2712-11ef-b084-113036b5c18d" 00:15:54.803 ], 00:15:54.803 "product_name": "Malloc disk", 00:15:54.803 "block_size": 512, 00:15:54.803 "num_blocks": 65536, 00:15:54.803 "uuid": "fe13dd70-2712-11ef-b084-113036b5c18d", 00:15:54.803 "assigned_rate_limits": { 00:15:54.803 "rw_ios_per_sec": 0, 00:15:54.803 "rw_mbytes_per_sec": 0, 00:15:54.803 "r_mbytes_per_sec": 0, 00:15:54.803 "w_mbytes_per_sec": 0 00:15:54.803 }, 00:15:54.803 "claimed": true, 00:15:54.803 "claim_type": "exclusive_write", 00:15:54.803 "zoned": false, 00:15:54.803 "supported_io_types": { 00:15:54.803 "read": true, 00:15:54.803 "write": true, 00:15:54.803 "unmap": true, 00:15:54.803 "write_zeroes": true, 00:15:54.803 "flush": true, 00:15:54.803 "reset": true, 00:15:54.803 "compare": false, 00:15:54.803 "compare_and_write": false, 00:15:54.803 "abort": true, 00:15:54.803 "nvme_admin": false, 00:15:54.803 "nvme_io": false 00:15:54.803 }, 00:15:54.803 "memory_domains": [ 00:15:54.803 { 00:15:54.803 "dma_device_id": "system", 00:15:54.803 "dma_device_type": 1 00:15:54.803 }, 00:15:54.803 { 00:15:54.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.803 "dma_device_type": 2 00:15:54.803 } 00:15:54.803 ], 00:15:54.803 "driver_specific": {} 00:15:54.803 } 00:15:54.803 ] 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.803 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.061 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.061 "name": "Existed_Raid", 00:15:55.061 "uuid": "fcd98180-2712-11ef-b084-113036b5c18d", 00:15:55.061 "strip_size_kb": 0, 00:15:55.061 "state": "configuring", 00:15:55.061 "raid_level": "raid1", 00:15:55.061 "superblock": true, 00:15:55.061 "num_base_bdevs": 4, 00:15:55.061 "num_base_bdevs_discovered": 3, 00:15:55.061 "num_base_bdevs_operational": 4, 00:15:55.061 "base_bdevs_list": [ 00:15:55.061 { 00:15:55.061 "name": "BaseBdev1", 00:15:55.061 "uuid": "fbd57c15-2712-11ef-b084-113036b5c18d", 00:15:55.061 "is_configured": true, 00:15:55.061 "data_offset": 2048, 00:15:55.061 "data_size": 63488 00:15:55.061 }, 00:15:55.061 { 00:15:55.061 "name": "BaseBdev2", 00:15:55.062 "uuid": "fd52fa05-2712-11ef-b084-113036b5c18d", 00:15:55.062 "is_configured": true, 00:15:55.062 "data_offset": 2048, 00:15:55.062 "data_size": 63488 00:15:55.062 }, 00:15:55.062 { 00:15:55.062 "name": "BaseBdev3", 00:15:55.062 "uuid": "fe13dd70-2712-11ef-b084-113036b5c18d", 00:15:55.062 "is_configured": true, 00:15:55.062 "data_offset": 2048, 00:15:55.062 "data_size": 63488 00:15:55.062 }, 00:15:55.062 { 00:15:55.062 "name": "BaseBdev4", 00:15:55.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.062 "is_configured": false, 00:15:55.062 "data_offset": 0, 00:15:55.062 "data_size": 0 00:15:55.062 } 00:15:55.062 ] 00:15:55.062 }' 00:15:55.062 10:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.062 10:20:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.627 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:55.627 [2024-06-10 10:20:01.218569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:55.627 [2024-06-10 10:20:01.218632] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e04ba00 00:15:55.627 [2024-06-10 10:20:01.218638] bdev_raid.c:1696:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:15:55.627 [2024-06-10 10:20:01.218657] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e0aeec0 00:15:55.627 [2024-06-10 10:20:01.218698] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e04ba00 00:15:55.627 [2024-06-10 10:20:01.218702] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82e04ba00 00:15:55.627 [2024-06-10 10:20:01.218732] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.627 BaseBdev4 00:15:55.884 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:55.884 10:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:15:55.884 10:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:55.884 10:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:15:55.884 10:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:55.884 10:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:55.884 10:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:55.884 10:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:56.142 [ 00:15:56.142 { 00:15:56.142 "name": "BaseBdev4", 00:15:56.142 "aliases": [ 00:15:56.142 "fef5b4b2-2712-11ef-b084-113036b5c18d" 00:15:56.142 ], 00:15:56.142 "product_name": "Malloc disk", 00:15:56.142 "block_size": 512, 00:15:56.142 "num_blocks": 65536, 00:15:56.142 "uuid": "fef5b4b2-2712-11ef-b084-113036b5c18d", 00:15:56.142 "assigned_rate_limits": { 00:15:56.142 "rw_ios_per_sec": 0, 00:15:56.142 "rw_mbytes_per_sec": 0, 00:15:56.142 "r_mbytes_per_sec": 0, 00:15:56.142 "w_mbytes_per_sec": 0 00:15:56.142 }, 00:15:56.142 "claimed": true, 00:15:56.142 "claim_type": "exclusive_write", 00:15:56.142 "zoned": false, 00:15:56.142 "supported_io_types": { 00:15:56.142 "read": true, 00:15:56.142 "write": true, 00:15:56.142 "unmap": true, 00:15:56.142 "write_zeroes": true, 00:15:56.142 "flush": true, 00:15:56.142 "reset": true, 00:15:56.142 "compare": false, 00:15:56.142 "compare_and_write": false, 00:15:56.142 "abort": true, 00:15:56.142 "nvme_admin": false, 00:15:56.142 "nvme_io": false 00:15:56.142 }, 00:15:56.142 "memory_domains": [ 00:15:56.142 { 00:15:56.142 "dma_device_id": "system", 00:15:56.142 "dma_device_type": 1 00:15:56.142 }, 00:15:56.142 { 00:15:56.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.142 "dma_device_type": 2 00:15:56.142 } 00:15:56.142 ], 00:15:56.142 "driver_specific": {} 00:15:56.142 } 00:15:56.142 ] 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.400 "name": "Existed_Raid", 00:15:56.400 "uuid": "fcd98180-2712-11ef-b084-113036b5c18d", 00:15:56.400 "strip_size_kb": 0, 00:15:56.400 "state": "online", 00:15:56.400 "raid_level": "raid1", 00:15:56.400 "superblock": true, 00:15:56.400 "num_base_bdevs": 4, 00:15:56.400 "num_base_bdevs_discovered": 4, 00:15:56.400 "num_base_bdevs_operational": 4, 00:15:56.400 "base_bdevs_list": [ 00:15:56.400 { 00:15:56.400 "name": "BaseBdev1", 00:15:56.400 "uuid": "fbd57c15-2712-11ef-b084-113036b5c18d", 00:15:56.400 "is_configured": true, 00:15:56.400 "data_offset": 2048, 00:15:56.400 "data_size": 63488 00:15:56.400 }, 00:15:56.400 { 00:15:56.400 "name": "BaseBdev2", 00:15:56.400 "uuid": "fd52fa05-2712-11ef-b084-113036b5c18d", 00:15:56.400 "is_configured": true, 00:15:56.400 "data_offset": 2048, 00:15:56.400 "data_size": 63488 00:15:56.400 }, 00:15:56.400 { 00:15:56.400 "name": "BaseBdev3", 00:15:56.400 "uuid": "fe13dd70-2712-11ef-b084-113036b5c18d", 00:15:56.400 "is_configured": true, 00:15:56.400 "data_offset": 2048, 00:15:56.400 "data_size": 63488 00:15:56.400 }, 00:15:56.400 { 00:15:56.400 "name": "BaseBdev4", 00:15:56.400 "uuid": "fef5b4b2-2712-11ef-b084-113036b5c18d", 00:15:56.400 "is_configured": true, 00:15:56.400 "data_offset": 2048, 00:15:56.400 "data_size": 63488 00:15:56.400 } 00:15:56.400 ] 00:15:56.400 }' 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.400 10:20:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.966 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:56.966 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:56.966 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:56.966 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:56.966 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:56.966 
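verify_raid_bdev_state only traces the bdev_raid_get_bdevs/jq fetch; the comparisons run after xtrace_disable, so they do not show up in the log. One way the captured raid_bdev_info could be checked against the arguments 'Existed_Raid online raid1 0 4' -- the field names come from the JSON above, the exact assertions are an assumption:

    rpc() { /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }   # illustrative shorthand

    raid_bdev_info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    [[ $(jq -r '.state'         <<< "$raid_bdev_info") == online ]]
    [[ $(jq -r '.raid_level'    <<< "$raid_bdev_info") == raid1 ]]
    [[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") -eq 0 ]]
    [[ $(jq -r '.num_base_bdevs_discovered'  <<< "$raid_bdev_info") -eq 4 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") -eq 4 ]]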
10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:56.966 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:56.966 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:56.966 [2024-06-10 10:20:02.470544] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.966 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:56.966 "name": "Existed_Raid", 00:15:56.966 "aliases": [ 00:15:56.966 "fcd98180-2712-11ef-b084-113036b5c18d" 00:15:56.966 ], 00:15:56.966 "product_name": "Raid Volume", 00:15:56.966 "block_size": 512, 00:15:56.966 "num_blocks": 63488, 00:15:56.966 "uuid": "fcd98180-2712-11ef-b084-113036b5c18d", 00:15:56.966 "assigned_rate_limits": { 00:15:56.966 "rw_ios_per_sec": 0, 00:15:56.966 "rw_mbytes_per_sec": 0, 00:15:56.966 "r_mbytes_per_sec": 0, 00:15:56.966 "w_mbytes_per_sec": 0 00:15:56.966 }, 00:15:56.966 "claimed": false, 00:15:56.966 "zoned": false, 00:15:56.966 "supported_io_types": { 00:15:56.966 "read": true, 00:15:56.966 "write": true, 00:15:56.966 "unmap": false, 00:15:56.966 "write_zeroes": true, 00:15:56.966 "flush": false, 00:15:56.966 "reset": true, 00:15:56.966 "compare": false, 00:15:56.966 "compare_and_write": false, 00:15:56.966 "abort": false, 00:15:56.966 "nvme_admin": false, 00:15:56.966 "nvme_io": false 00:15:56.966 }, 00:15:56.966 "memory_domains": [ 00:15:56.966 { 00:15:56.966 "dma_device_id": "system", 00:15:56.966 "dma_device_type": 1 00:15:56.966 }, 00:15:56.966 { 00:15:56.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.966 "dma_device_type": 2 00:15:56.966 }, 00:15:56.966 { 00:15:56.966 "dma_device_id": "system", 00:15:56.966 "dma_device_type": 1 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.967 "dma_device_type": 2 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "dma_device_id": "system", 00:15:56.967 "dma_device_type": 1 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.967 "dma_device_type": 2 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "dma_device_id": "system", 00:15:56.967 "dma_device_type": 1 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.967 "dma_device_type": 2 00:15:56.967 } 00:15:56.967 ], 00:15:56.967 "driver_specific": { 00:15:56.967 "raid": { 00:15:56.967 "uuid": "fcd98180-2712-11ef-b084-113036b5c18d", 00:15:56.967 "strip_size_kb": 0, 00:15:56.967 "state": "online", 00:15:56.967 "raid_level": "raid1", 00:15:56.967 "superblock": true, 00:15:56.967 "num_base_bdevs": 4, 00:15:56.967 "num_base_bdevs_discovered": 4, 00:15:56.967 "num_base_bdevs_operational": 4, 00:15:56.967 "base_bdevs_list": [ 00:15:56.967 { 00:15:56.967 "name": "BaseBdev1", 00:15:56.967 "uuid": "fbd57c15-2712-11ef-b084-113036b5c18d", 00:15:56.967 "is_configured": true, 00:15:56.967 "data_offset": 2048, 00:15:56.967 "data_size": 63488 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "name": "BaseBdev2", 00:15:56.967 "uuid": "fd52fa05-2712-11ef-b084-113036b5c18d", 00:15:56.967 "is_configured": true, 00:15:56.967 "data_offset": 2048, 00:15:56.967 "data_size": 63488 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "name": "BaseBdev3", 00:15:56.967 "uuid": "fe13dd70-2712-11ef-b084-113036b5c18d", 00:15:56.967 "is_configured": true, 00:15:56.967 "data_offset": 
2048, 00:15:56.967 "data_size": 63488 00:15:56.967 }, 00:15:56.967 { 00:15:56.967 "name": "BaseBdev4", 00:15:56.967 "uuid": "fef5b4b2-2712-11ef-b084-113036b5c18d", 00:15:56.967 "is_configured": true, 00:15:56.967 "data_offset": 2048, 00:15:56.967 "data_size": 63488 00:15:56.967 } 00:15:56.967 ] 00:15:56.967 } 00:15:56.967 } 00:15:56.967 }' 00:15:56.967 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.967 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:56.967 BaseBdev2 00:15:56.967 BaseBdev3 00:15:56.967 BaseBdev4' 00:15:56.967 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.967 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:56.967 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.225 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.225 "name": "BaseBdev1", 00:15:57.225 "aliases": [ 00:15:57.225 "fbd57c15-2712-11ef-b084-113036b5c18d" 00:15:57.225 ], 00:15:57.225 "product_name": "Malloc disk", 00:15:57.225 "block_size": 512, 00:15:57.225 "num_blocks": 65536, 00:15:57.225 "uuid": "fbd57c15-2712-11ef-b084-113036b5c18d", 00:15:57.225 "assigned_rate_limits": { 00:15:57.225 "rw_ios_per_sec": 0, 00:15:57.225 "rw_mbytes_per_sec": 0, 00:15:57.225 "r_mbytes_per_sec": 0, 00:15:57.225 "w_mbytes_per_sec": 0 00:15:57.225 }, 00:15:57.225 "claimed": true, 00:15:57.225 "claim_type": "exclusive_write", 00:15:57.225 "zoned": false, 00:15:57.225 "supported_io_types": { 00:15:57.225 "read": true, 00:15:57.225 "write": true, 00:15:57.225 "unmap": true, 00:15:57.225 "write_zeroes": true, 00:15:57.225 "flush": true, 00:15:57.225 "reset": true, 00:15:57.225 "compare": false, 00:15:57.225 "compare_and_write": false, 00:15:57.225 "abort": true, 00:15:57.225 "nvme_admin": false, 00:15:57.225 "nvme_io": false 00:15:57.225 }, 00:15:57.225 "memory_domains": [ 00:15:57.226 { 00:15:57.226 "dma_device_id": "system", 00:15:57.226 "dma_device_type": 1 00:15:57.226 }, 00:15:57.226 { 00:15:57.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.226 "dma_device_type": 2 00:15:57.226 } 00:15:57.226 ], 00:15:57.226 "driver_specific": {} 00:15:57.226 }' 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb 
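The property probes traced just above and continued below (block_size, md_size, md_interleave, dif_type, repeated for BaseBdev1 through BaseBdev4) all follow one pattern. Condensed into a loop; the loop form is illustrative, the individual jq checks mirror the trace:

    rpc() { /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }   # illustrative shorthand

    # Pick out the configured members of the raid volume, then probe each one.
    base_bdev_names=$(rpc bdev_get_bdevs -b Existed_Raid |
        jq -r '.[] | .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

    for name in $base_bdev_names; do
        base_bdev_info=$(rpc bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$base_bdev_info") == 512 ]]    # malloc base bdevs use 512-byte blocks
        [[ $(jq .md_size       <<< "$base_bdev_info") == null ]]   # no separate metadata
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
        [[ $(jq .dif_type      <<< "$base_bdev_info") == null ]]
    done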
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:57.226 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.484 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.484 "name": "BaseBdev2", 00:15:57.484 "aliases": [ 00:15:57.484 "fd52fa05-2712-11ef-b084-113036b5c18d" 00:15:57.484 ], 00:15:57.484 "product_name": "Malloc disk", 00:15:57.484 "block_size": 512, 00:15:57.484 "num_blocks": 65536, 00:15:57.484 "uuid": "fd52fa05-2712-11ef-b084-113036b5c18d", 00:15:57.484 "assigned_rate_limits": { 00:15:57.484 "rw_ios_per_sec": 0, 00:15:57.484 "rw_mbytes_per_sec": 0, 00:15:57.484 "r_mbytes_per_sec": 0, 00:15:57.484 "w_mbytes_per_sec": 0 00:15:57.484 }, 00:15:57.484 "claimed": true, 00:15:57.484 "claim_type": "exclusive_write", 00:15:57.484 "zoned": false, 00:15:57.484 "supported_io_types": { 00:15:57.484 "read": true, 00:15:57.484 "write": true, 00:15:57.484 "unmap": true, 00:15:57.484 "write_zeroes": true, 00:15:57.484 "flush": true, 00:15:57.484 "reset": true, 00:15:57.484 "compare": false, 00:15:57.484 "compare_and_write": false, 00:15:57.484 "abort": true, 00:15:57.484 "nvme_admin": false, 00:15:57.484 "nvme_io": false 00:15:57.484 }, 00:15:57.484 "memory_domains": [ 00:15:57.484 { 00:15:57.484 "dma_device_id": "system", 00:15:57.484 "dma_device_type": 1 00:15:57.484 }, 00:15:57.484 { 00:15:57.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.484 "dma_device_type": 2 00:15:57.484 } 00:15:57.484 ], 00:15:57.484 "driver_specific": {} 00:15:57.484 }' 00:15:57.484 10:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:57.484 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.742 "name": "BaseBdev3", 00:15:57.742 "aliases": [ 00:15:57.742 "fe13dd70-2712-11ef-b084-113036b5c18d" 00:15:57.742 ], 00:15:57.742 "product_name": "Malloc disk", 00:15:57.742 "block_size": 512, 00:15:57.742 "num_blocks": 65536, 00:15:57.742 "uuid": "fe13dd70-2712-11ef-b084-113036b5c18d", 00:15:57.742 "assigned_rate_limits": { 00:15:57.742 "rw_ios_per_sec": 0, 00:15:57.742 "rw_mbytes_per_sec": 0, 00:15:57.742 "r_mbytes_per_sec": 0, 00:15:57.742 "w_mbytes_per_sec": 0 00:15:57.742 }, 00:15:57.742 "claimed": true, 00:15:57.742 "claim_type": "exclusive_write", 00:15:57.742 "zoned": false, 00:15:57.742 "supported_io_types": { 00:15:57.742 "read": true, 00:15:57.742 "write": true, 00:15:57.742 "unmap": true, 00:15:57.742 "write_zeroes": true, 00:15:57.742 "flush": true, 00:15:57.742 "reset": true, 00:15:57.742 "compare": false, 00:15:57.742 "compare_and_write": false, 00:15:57.742 "abort": true, 00:15:57.742 "nvme_admin": false, 00:15:57.742 "nvme_io": false 00:15:57.742 }, 00:15:57.742 "memory_domains": [ 00:15:57.742 { 00:15:57.742 "dma_device_id": "system", 00:15:57.742 "dma_device_type": 1 00:15:57.742 }, 00:15:57.742 { 00:15:57.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.742 "dma_device_type": 2 00:15:57.742 } 00:15:57.742 ], 00:15:57.742 "driver_specific": {} 00:15:57.742 }' 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.742 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:58.009 "name": "BaseBdev4", 00:15:58.009 "aliases": [ 00:15:58.009 "fef5b4b2-2712-11ef-b084-113036b5c18d" 
00:15:58.009 ], 00:15:58.009 "product_name": "Malloc disk", 00:15:58.009 "block_size": 512, 00:15:58.009 "num_blocks": 65536, 00:15:58.009 "uuid": "fef5b4b2-2712-11ef-b084-113036b5c18d", 00:15:58.009 "assigned_rate_limits": { 00:15:58.009 "rw_ios_per_sec": 0, 00:15:58.009 "rw_mbytes_per_sec": 0, 00:15:58.009 "r_mbytes_per_sec": 0, 00:15:58.009 "w_mbytes_per_sec": 0 00:15:58.009 }, 00:15:58.009 "claimed": true, 00:15:58.009 "claim_type": "exclusive_write", 00:15:58.009 "zoned": false, 00:15:58.009 "supported_io_types": { 00:15:58.009 "read": true, 00:15:58.009 "write": true, 00:15:58.009 "unmap": true, 00:15:58.009 "write_zeroes": true, 00:15:58.009 "flush": true, 00:15:58.009 "reset": true, 00:15:58.009 "compare": false, 00:15:58.009 "compare_and_write": false, 00:15:58.009 "abort": true, 00:15:58.009 "nvme_admin": false, 00:15:58.009 "nvme_io": false 00:15:58.009 }, 00:15:58.009 "memory_domains": [ 00:15:58.009 { 00:15:58.009 "dma_device_id": "system", 00:15:58.009 "dma_device_type": 1 00:15:58.009 }, 00:15:58.009 { 00:15:58.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.009 "dma_device_type": 2 00:15:58.009 } 00:15:58.009 ], 00:15:58.009 "driver_specific": {} 00:15:58.009 }' 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:58.009 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:58.272 [2024-06-10 10:20:03.826578] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.272 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.273 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.273 10:20:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.839 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.840 "name": "Existed_Raid", 00:15:58.840 "uuid": "fcd98180-2712-11ef-b084-113036b5c18d", 00:15:58.840 "strip_size_kb": 0, 00:15:58.840 "state": "online", 00:15:58.840 "raid_level": "raid1", 00:15:58.840 "superblock": true, 00:15:58.840 "num_base_bdevs": 4, 00:15:58.840 "num_base_bdevs_discovered": 3, 00:15:58.840 "num_base_bdevs_operational": 3, 00:15:58.840 "base_bdevs_list": [ 00:15:58.840 { 00:15:58.840 "name": null, 00:15:58.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.840 "is_configured": false, 00:15:58.840 "data_offset": 2048, 00:15:58.840 "data_size": 63488 00:15:58.840 }, 00:15:58.840 { 00:15:58.840 "name": "BaseBdev2", 00:15:58.840 "uuid": "fd52fa05-2712-11ef-b084-113036b5c18d", 00:15:58.840 "is_configured": true, 00:15:58.840 "data_offset": 2048, 00:15:58.840 "data_size": 63488 00:15:58.840 }, 00:15:58.840 { 00:15:58.840 "name": "BaseBdev3", 00:15:58.840 "uuid": "fe13dd70-2712-11ef-b084-113036b5c18d", 00:15:58.840 "is_configured": true, 00:15:58.840 "data_offset": 2048, 00:15:58.840 "data_size": 63488 00:15:58.840 }, 00:15:58.840 { 00:15:58.840 "name": "BaseBdev4", 00:15:58.840 "uuid": "fef5b4b2-2712-11ef-b084-113036b5c18d", 00:15:58.840 "is_configured": true, 00:15:58.840 "data_offset": 2048, 00:15:58.840 "data_size": 63488 00:15:58.840 } 00:15:58.840 ] 00:15:58.840 }' 00:15:58.840 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.840 10:20:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.097 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:59.097 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:59.097 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.097 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:59.355 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:59.355 
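Because raid1 is redundant, deleting one base bdev leaves Existed_Raid online with three of four members, which is what the JSON above reports: the freed slot keeps a null name and the all-zero uuid while the state stays online. A compact restatement of that expectation, under the same illustrative rpc()/jq assumptions as the earlier sketches:

    rpc() { /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }   # illustrative shorthand

    rpc bdev_malloc_delete BaseBdev1     # raid1 tolerates losing a single member

    raid_bdev_info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == online ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") -eq 3 ]]
    [[ $(jq -r '.base_bdevs_list[0].name' <<< "$raid_bdev_info") == null ]]   # slot of the deleted BaseBdev1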
10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.355 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:59.355 [2024-06-10 10:20:04.939478] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.355 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:59.355 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:59.355 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:59.355 10:20:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.921 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:59.921 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.921 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:59.921 [2024-06-10 10:20:05.424301] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:59.921 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:59.921 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:59.921 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.921 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:00.178 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:00.179 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.179 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:00.500 [2024-06-10 10:20:05.857101] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:00.500 [2024-06-10 10:20:05.857131] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.500 [2024-06-10 10:20:05.862006] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.500 [2024-06-10 10:20:05.862023] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.500 [2024-06-10 10:20:05.862027] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e04ba00 name Existed_Raid, state offline 00:16:00.500 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:00.500 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.500 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:00.500 10:20:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.500 10:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:00.500 10:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:00.500 10:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:00.500 10:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:00.500 10:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:00.500 10:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:00.758 BaseBdev2 00:16:00.758 10:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:00.758 10:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:16:00.758 10:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:00.758 10:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:00.758 10:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:00.758 10:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:00.758 10:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.017 10:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.275 [ 00:16:01.275 { 00:16:01.275 "name": "BaseBdev2", 00:16:01.275 "aliases": [ 00:16:01.275 "02037dba-2713-11ef-b084-113036b5c18d" 00:16:01.275 ], 00:16:01.275 "product_name": "Malloc disk", 00:16:01.275 "block_size": 512, 00:16:01.275 "num_blocks": 65536, 00:16:01.275 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:01.275 "assigned_rate_limits": { 00:16:01.275 "rw_ios_per_sec": 0, 00:16:01.275 "rw_mbytes_per_sec": 0, 00:16:01.275 "r_mbytes_per_sec": 0, 00:16:01.275 "w_mbytes_per_sec": 0 00:16:01.275 }, 00:16:01.275 "claimed": false, 00:16:01.275 "zoned": false, 00:16:01.275 "supported_io_types": { 00:16:01.275 "read": true, 00:16:01.275 "write": true, 00:16:01.275 "unmap": true, 00:16:01.275 "write_zeroes": true, 00:16:01.275 "flush": true, 00:16:01.275 "reset": true, 00:16:01.275 "compare": false, 00:16:01.275 "compare_and_write": false, 00:16:01.275 "abort": true, 00:16:01.275 "nvme_admin": false, 00:16:01.275 "nvme_io": false 00:16:01.275 }, 00:16:01.275 "memory_domains": [ 00:16:01.275 { 00:16:01.275 "dma_device_id": "system", 00:16:01.275 "dma_device_type": 1 00:16:01.275 }, 00:16:01.275 { 00:16:01.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.275 "dma_device_type": 2 00:16:01.275 } 00:16:01.275 ], 00:16:01.275 "driver_specific": {} 00:16:01.275 } 00:16:01.275 ] 00:16:01.275 10:20:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:01.275 10:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:01.275 10:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- 
# (( i < num_base_bdevs )) 00:16:01.275 10:20:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:01.532 BaseBdev3 00:16:01.532 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:01.532 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev3 00:16:01.532 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:01.532 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:01.789 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:01.789 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:01.789 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.789 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:02.048 [ 00:16:02.048 { 00:16:02.048 "name": "BaseBdev3", 00:16:02.048 "aliases": [ 00:16:02.048 "02794c29-2713-11ef-b084-113036b5c18d" 00:16:02.048 ], 00:16:02.048 "product_name": "Malloc disk", 00:16:02.048 "block_size": 512, 00:16:02.048 "num_blocks": 65536, 00:16:02.048 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:02.048 "assigned_rate_limits": { 00:16:02.048 "rw_ios_per_sec": 0, 00:16:02.048 "rw_mbytes_per_sec": 0, 00:16:02.048 "r_mbytes_per_sec": 0, 00:16:02.048 "w_mbytes_per_sec": 0 00:16:02.048 }, 00:16:02.048 "claimed": false, 00:16:02.048 "zoned": false, 00:16:02.048 "supported_io_types": { 00:16:02.048 "read": true, 00:16:02.048 "write": true, 00:16:02.048 "unmap": true, 00:16:02.048 "write_zeroes": true, 00:16:02.048 "flush": true, 00:16:02.048 "reset": true, 00:16:02.048 "compare": false, 00:16:02.048 "compare_and_write": false, 00:16:02.048 "abort": true, 00:16:02.048 "nvme_admin": false, 00:16:02.048 "nvme_io": false 00:16:02.048 }, 00:16:02.048 "memory_domains": [ 00:16:02.048 { 00:16:02.048 "dma_device_id": "system", 00:16:02.048 "dma_device_type": 1 00:16:02.048 }, 00:16:02.048 { 00:16:02.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.048 "dma_device_type": 2 00:16:02.048 } 00:16:02.048 ], 00:16:02.048 "driver_specific": {} 00:16:02.048 } 00:16:02.048 ] 00:16:02.048 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:02.048 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:02.048 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:02.048 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:02.306 BaseBdev4 00:16:02.306 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:02.306 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev4 00:16:02.306 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_timeout= 00:16:02.306 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:02.306 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:02.306 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:02.306 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.565 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:02.823 [ 00:16:02.823 { 00:16:02.823 "name": "BaseBdev4", 00:16:02.823 "aliases": [ 00:16:02.823 "02e99b26-2713-11ef-b084-113036b5c18d" 00:16:02.823 ], 00:16:02.823 "product_name": "Malloc disk", 00:16:02.823 "block_size": 512, 00:16:02.823 "num_blocks": 65536, 00:16:02.823 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:02.823 "assigned_rate_limits": { 00:16:02.823 "rw_ios_per_sec": 0, 00:16:02.823 "rw_mbytes_per_sec": 0, 00:16:02.823 "r_mbytes_per_sec": 0, 00:16:02.823 "w_mbytes_per_sec": 0 00:16:02.823 }, 00:16:02.823 "claimed": false, 00:16:02.823 "zoned": false, 00:16:02.823 "supported_io_types": { 00:16:02.823 "read": true, 00:16:02.823 "write": true, 00:16:02.823 "unmap": true, 00:16:02.823 "write_zeroes": true, 00:16:02.823 "flush": true, 00:16:02.823 "reset": true, 00:16:02.823 "compare": false, 00:16:02.823 "compare_and_write": false, 00:16:02.823 "abort": true, 00:16:02.823 "nvme_admin": false, 00:16:02.823 "nvme_io": false 00:16:02.823 }, 00:16:02.823 "memory_domains": [ 00:16:02.823 { 00:16:02.823 "dma_device_id": "system", 00:16:02.823 "dma_device_type": 1 00:16:02.823 }, 00:16:02.823 { 00:16:02.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.823 "dma_device_type": 2 00:16:02.823 } 00:16:02.823 ], 00:16:02.823 "driver_specific": {} 00:16:02.823 } 00:16:02.823 ] 00:16:02.823 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:02.823 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:02.823 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:02.823 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:03.082 [2024-06-10 10:20:08.590082] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.082 [2024-06-10 10:20:08.590132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.082 [2024-06-10 10:20:08.590156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.082 [2024-06-10 10:20:08.590595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.082 [2024-06-10 10:20:08.590605] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
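This step deliberately creates the array while one named member is still missing: bdev_raid_create claims BaseBdev2 through BaseBdev4 and the raid stays in the configuring state until BaseBdev1 appears. A sketch of the call and the expectation it sets up, with the same illustrative rpc() shorthand:

    rpc() { /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }   # illustrative shorthand

    # -s writes a superblock to each member, -r picks the raid level, -n names the volume
    rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    raid_bdev_info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == configuring ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info") -eq 3 ]]   # BaseBdev1 not found yet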
00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.082 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.352 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.352 "name": "Existed_Raid", 00:16:03.352 "uuid": "035a85de-2713-11ef-b084-113036b5c18d", 00:16:03.352 "strip_size_kb": 0, 00:16:03.352 "state": "configuring", 00:16:03.352 "raid_level": "raid1", 00:16:03.352 "superblock": true, 00:16:03.352 "num_base_bdevs": 4, 00:16:03.352 "num_base_bdevs_discovered": 3, 00:16:03.352 "num_base_bdevs_operational": 4, 00:16:03.352 "base_bdevs_list": [ 00:16:03.352 { 00:16:03.352 "name": "BaseBdev1", 00:16:03.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.352 "is_configured": false, 00:16:03.352 "data_offset": 0, 00:16:03.352 "data_size": 0 00:16:03.352 }, 00:16:03.352 { 00:16:03.352 "name": "BaseBdev2", 00:16:03.352 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:03.352 "is_configured": true, 00:16:03.352 "data_offset": 2048, 00:16:03.352 "data_size": 63488 00:16:03.352 }, 00:16:03.352 { 00:16:03.352 "name": "BaseBdev3", 00:16:03.352 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:03.352 "is_configured": true, 00:16:03.352 "data_offset": 2048, 00:16:03.352 "data_size": 63488 00:16:03.352 }, 00:16:03.352 { 00:16:03.352 "name": "BaseBdev4", 00:16:03.352 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:03.352 "is_configured": true, 00:16:03.352 "data_offset": 2048, 00:16:03.352 "data_size": 63488 00:16:03.352 } 00:16:03.352 ] 00:16:03.352 }' 00:16:03.352 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.352 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.611 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:03.868 [2024-06-10 10:20:09.390113] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:03.868 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.869 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.127 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.127 "name": "Existed_Raid", 00:16:04.127 "uuid": "035a85de-2713-11ef-b084-113036b5c18d", 00:16:04.127 "strip_size_kb": 0, 00:16:04.127 "state": "configuring", 00:16:04.127 "raid_level": "raid1", 00:16:04.127 "superblock": true, 00:16:04.127 "num_base_bdevs": 4, 00:16:04.127 "num_base_bdevs_discovered": 2, 00:16:04.127 "num_base_bdevs_operational": 4, 00:16:04.127 "base_bdevs_list": [ 00:16:04.127 { 00:16:04.127 "name": "BaseBdev1", 00:16:04.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.127 "is_configured": false, 00:16:04.127 "data_offset": 0, 00:16:04.127 "data_size": 0 00:16:04.127 }, 00:16:04.127 { 00:16:04.127 "name": null, 00:16:04.127 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:04.127 "is_configured": false, 00:16:04.127 "data_offset": 2048, 00:16:04.127 "data_size": 63488 00:16:04.127 }, 00:16:04.127 { 00:16:04.127 "name": "BaseBdev3", 00:16:04.127 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:04.127 "is_configured": true, 00:16:04.127 "data_offset": 2048, 00:16:04.127 "data_size": 63488 00:16:04.127 }, 00:16:04.127 { 00:16:04.127 "name": "BaseBdev4", 00:16:04.127 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:04.127 "is_configured": true, 00:16:04.127 "data_offset": 2048, 00:16:04.127 "data_size": 63488 00:16:04.127 } 00:16:04.127 ] 00:16:04.127 }' 00:16:04.127 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.127 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.386 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.386 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:04.644 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:04.644 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.902 [2024-06-10 10:20:10.326253] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:16:04.902 BaseBdev1 00:16:04.902 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:04.902 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:16:04.902 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:04.902 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:04.902 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:04.902 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:04.902 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.160 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:05.418 [ 00:16:05.418 { 00:16:05.418 "name": "BaseBdev1", 00:16:05.418 "aliases": [ 00:16:05.418 "04636d6e-2713-11ef-b084-113036b5c18d" 00:16:05.418 ], 00:16:05.418 "product_name": "Malloc disk", 00:16:05.418 "block_size": 512, 00:16:05.418 "num_blocks": 65536, 00:16:05.418 "uuid": "04636d6e-2713-11ef-b084-113036b5c18d", 00:16:05.418 "assigned_rate_limits": { 00:16:05.418 "rw_ios_per_sec": 0, 00:16:05.418 "rw_mbytes_per_sec": 0, 00:16:05.418 "r_mbytes_per_sec": 0, 00:16:05.418 "w_mbytes_per_sec": 0 00:16:05.418 }, 00:16:05.418 "claimed": true, 00:16:05.418 "claim_type": "exclusive_write", 00:16:05.418 "zoned": false, 00:16:05.418 "supported_io_types": { 00:16:05.418 "read": true, 00:16:05.418 "write": true, 00:16:05.418 "unmap": true, 00:16:05.418 "write_zeroes": true, 00:16:05.418 "flush": true, 00:16:05.418 "reset": true, 00:16:05.418 "compare": false, 00:16:05.418 "compare_and_write": false, 00:16:05.418 "abort": true, 00:16:05.418 "nvme_admin": false, 00:16:05.418 "nvme_io": false 00:16:05.418 }, 00:16:05.418 "memory_domains": [ 00:16:05.418 { 00:16:05.418 "dma_device_id": "system", 00:16:05.418 "dma_device_type": 1 00:16:05.418 }, 00:16:05.418 { 00:16:05.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.418 "dma_device_type": 2 00:16:05.418 } 00:16:05.418 ], 00:16:05.418 "driver_specific": {} 00:16:05.418 } 00:16:05.418 ] 00:16:05.418 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:05.418 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:05.418 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.419 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:05.419 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:05.419 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:05.419 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:05.419 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.419 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
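Once BaseBdev1 is finally created it is claimed by the waiting raid right away (the 'bdev BaseBdev1 is claimed' debug line and the exclusive_write claim in the JSON above), but the array remains configuring because the BaseBdev2 slot was detached earlier with bdev_raid_remove_base_bdev. A short restatement of that check, under the same illustrative assumptions as before:

    rpc() { /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }   # illustrative shorthand

    rpc bdev_malloc_create 32 512 -b BaseBdev1     # 32 MB backing store, 512-byte blocks, as in the trace
    rpc bdev_wait_for_examine
    rpc bdev_get_bdevs -b BaseBdev1 -t 2000 > /dev/null

    raid_bdev_info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == configuring ]]
    [[ $(jq -r '.base_bdevs_list[1].is_configured' <<< "$raid_bdev_info") == false ]]   # detached BaseBdev2 slot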
00:16:05.419 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.419 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.419 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.419 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.678 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:05.678 "name": "Existed_Raid", 00:16:05.678 "uuid": "035a85de-2713-11ef-b084-113036b5c18d", 00:16:05.678 "strip_size_kb": 0, 00:16:05.678 "state": "configuring", 00:16:05.678 "raid_level": "raid1", 00:16:05.678 "superblock": true, 00:16:05.678 "num_base_bdevs": 4, 00:16:05.678 "num_base_bdevs_discovered": 3, 00:16:05.678 "num_base_bdevs_operational": 4, 00:16:05.678 "base_bdevs_list": [ 00:16:05.678 { 00:16:05.678 "name": "BaseBdev1", 00:16:05.678 "uuid": "04636d6e-2713-11ef-b084-113036b5c18d", 00:16:05.678 "is_configured": true, 00:16:05.678 "data_offset": 2048, 00:16:05.678 "data_size": 63488 00:16:05.678 }, 00:16:05.678 { 00:16:05.678 "name": null, 00:16:05.678 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:05.678 "is_configured": false, 00:16:05.678 "data_offset": 2048, 00:16:05.678 "data_size": 63488 00:16:05.678 }, 00:16:05.678 { 00:16:05.678 "name": "BaseBdev3", 00:16:05.678 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:05.678 "is_configured": true, 00:16:05.678 "data_offset": 2048, 00:16:05.678 "data_size": 63488 00:16:05.678 }, 00:16:05.678 { 00:16:05.678 "name": "BaseBdev4", 00:16:05.678 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:05.678 "is_configured": true, 00:16:05.678 "data_offset": 2048, 00:16:05.678 "data_size": 63488 00:16:05.678 } 00:16:05.678 ] 00:16:05.678 }' 00:16:05.678 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:05.678 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.998 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.998 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:06.257 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:06.257 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:06.516 [2024-06-10 10:20:11.930202] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.516 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.772 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.772 "name": "Existed_Raid", 00:16:06.772 "uuid": "035a85de-2713-11ef-b084-113036b5c18d", 00:16:06.772 "strip_size_kb": 0, 00:16:06.772 "state": "configuring", 00:16:06.772 "raid_level": "raid1", 00:16:06.772 "superblock": true, 00:16:06.772 "num_base_bdevs": 4, 00:16:06.772 "num_base_bdevs_discovered": 2, 00:16:06.773 "num_base_bdevs_operational": 4, 00:16:06.773 "base_bdevs_list": [ 00:16:06.773 { 00:16:06.773 "name": "BaseBdev1", 00:16:06.773 "uuid": "04636d6e-2713-11ef-b084-113036b5c18d", 00:16:06.773 "is_configured": true, 00:16:06.773 "data_offset": 2048, 00:16:06.773 "data_size": 63488 00:16:06.773 }, 00:16:06.773 { 00:16:06.773 "name": null, 00:16:06.773 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:06.773 "is_configured": false, 00:16:06.773 "data_offset": 2048, 00:16:06.773 "data_size": 63488 00:16:06.773 }, 00:16:06.773 { 00:16:06.773 "name": null, 00:16:06.773 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:06.773 "is_configured": false, 00:16:06.773 "data_offset": 2048, 00:16:06.773 "data_size": 63488 00:16:06.773 }, 00:16:06.773 { 00:16:06.773 "name": "BaseBdev4", 00:16:06.773 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:06.773 "is_configured": true, 00:16:06.773 "data_offset": 2048, 00:16:06.773 "data_size": 63488 00:16:06.773 } 00:16:06.773 ] 00:16:06.773 }' 00:16:06.773 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.773 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.030 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:07.030 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.289 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:07.289 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:07.548 [2024-06-10 10:20:12.910252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- 
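bdev_raid_add_base_bdev re-attaches the slot that bdev_raid_remove_base_bdev detached: the is_configured flag for index 2 flips from false (the jq probe just before this point) back to true (the probe that follows). A compressed restatement with the same illustrative rpc() wrapper:

    rpc() { /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }   # illustrative shorthand

    [[ $(rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured') == false ]]
    rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3      # re-attach the detached BaseBdev3 slot
    [[ $(rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured') == true ]]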
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.548 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.548 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.548 "name": "Existed_Raid", 00:16:07.548 "uuid": "035a85de-2713-11ef-b084-113036b5c18d", 00:16:07.548 "strip_size_kb": 0, 00:16:07.548 "state": "configuring", 00:16:07.548 "raid_level": "raid1", 00:16:07.548 "superblock": true, 00:16:07.548 "num_base_bdevs": 4, 00:16:07.548 "num_base_bdevs_discovered": 3, 00:16:07.548 "num_base_bdevs_operational": 4, 00:16:07.548 "base_bdevs_list": [ 00:16:07.548 { 00:16:07.548 "name": "BaseBdev1", 00:16:07.548 "uuid": "04636d6e-2713-11ef-b084-113036b5c18d", 00:16:07.548 "is_configured": true, 00:16:07.548 "data_offset": 2048, 00:16:07.548 "data_size": 63488 00:16:07.548 }, 00:16:07.548 { 00:16:07.548 "name": null, 00:16:07.549 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:07.549 "is_configured": false, 00:16:07.549 "data_offset": 2048, 00:16:07.549 "data_size": 63488 00:16:07.549 }, 00:16:07.549 { 00:16:07.549 "name": "BaseBdev3", 00:16:07.549 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:07.549 "is_configured": true, 00:16:07.549 "data_offset": 2048, 00:16:07.549 "data_size": 63488 00:16:07.549 }, 00:16:07.549 { 00:16:07.549 "name": "BaseBdev4", 00:16:07.549 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:07.549 "is_configured": true, 00:16:07.549 "data_offset": 2048, 00:16:07.549 "data_size": 63488 00:16:07.549 } 00:16:07.549 ] 00:16:07.549 }' 00:16:07.549 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.549 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.118 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.118 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:08.119 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:08.119 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:08.378 [2024-06-10 10:20:13.874289] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.378 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.637 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.637 "name": "Existed_Raid", 00:16:08.637 "uuid": "035a85de-2713-11ef-b084-113036b5c18d", 00:16:08.637 "strip_size_kb": 0, 00:16:08.637 "state": "configuring", 00:16:08.637 "raid_level": "raid1", 00:16:08.637 "superblock": true, 00:16:08.637 "num_base_bdevs": 4, 00:16:08.638 "num_base_bdevs_discovered": 2, 00:16:08.638 "num_base_bdevs_operational": 4, 00:16:08.638 "base_bdevs_list": [ 00:16:08.638 { 00:16:08.638 "name": null, 00:16:08.638 "uuid": "04636d6e-2713-11ef-b084-113036b5c18d", 00:16:08.638 "is_configured": false, 00:16:08.638 "data_offset": 2048, 00:16:08.638 "data_size": 63488 00:16:08.638 }, 00:16:08.638 { 00:16:08.638 "name": null, 00:16:08.638 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:08.638 "is_configured": false, 00:16:08.638 "data_offset": 2048, 00:16:08.638 "data_size": 63488 00:16:08.638 }, 00:16:08.638 { 00:16:08.638 "name": "BaseBdev3", 00:16:08.638 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:08.638 "is_configured": true, 00:16:08.638 "data_offset": 2048, 00:16:08.638 "data_size": 63488 00:16:08.638 }, 00:16:08.638 { 00:16:08.638 "name": "BaseBdev4", 00:16:08.638 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:08.638 "is_configured": true, 00:16:08.638 "data_offset": 2048, 00:16:08.638 "data_size": 63488 00:16:08.638 } 00:16:08.638 ] 00:16:08.638 }' 00:16:08.638 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.638 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.981 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.981 10:20:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.258 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:09.258 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:09.258 [2024-06-10 10:20:14.851249] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.519 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.519 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.519 "name": "Existed_Raid", 00:16:09.519 "uuid": "035a85de-2713-11ef-b084-113036b5c18d", 00:16:09.519 "strip_size_kb": 0, 00:16:09.519 "state": "configuring", 00:16:09.519 "raid_level": "raid1", 00:16:09.519 "superblock": true, 00:16:09.519 "num_base_bdevs": 4, 00:16:09.519 "num_base_bdevs_discovered": 3, 00:16:09.519 "num_base_bdevs_operational": 4, 00:16:09.519 "base_bdevs_list": [ 00:16:09.519 { 00:16:09.519 "name": null, 00:16:09.519 "uuid": "04636d6e-2713-11ef-b084-113036b5c18d", 00:16:09.519 "is_configured": false, 00:16:09.519 "data_offset": 2048, 00:16:09.519 "data_size": 63488 00:16:09.519 }, 00:16:09.519 { 00:16:09.519 "name": "BaseBdev2", 00:16:09.519 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:09.519 "is_configured": true, 00:16:09.519 "data_offset": 2048, 00:16:09.519 "data_size": 63488 00:16:09.519 }, 00:16:09.519 { 00:16:09.519 "name": "BaseBdev3", 00:16:09.519 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:09.519 "is_configured": true, 00:16:09.519 "data_offset": 2048, 00:16:09.519 "data_size": 63488 00:16:09.519 }, 00:16:09.519 { 00:16:09.519 "name": "BaseBdev4", 00:16:09.519 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:09.519 "is_configured": true, 00:16:09.519 "data_offset": 2048, 00:16:09.519 "data_size": 63488 00:16:09.519 } 00:16:09.519 ] 00:16:09.519 }' 00:16:09.519 10:20:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.519 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.779 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.779 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:10.038 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:10.038 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.038 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:10.297 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 04636d6e-2713-11ef-b084-113036b5c18d 00:16:10.556 [2024-06-10 10:20:16.151408] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:10.556 [2024-06-10 10:20:16.151457] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e04bf00 00:16:10.556 [2024-06-10 10:20:16.151462] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:10.556 [2024-06-10 10:20:16.151488] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e0aee20 00:16:10.556 [2024-06-10 10:20:16.151542] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e04bf00 00:16:10.556 [2024-06-10 10:20:16.151546] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82e04bf00 00:16:10.556 [2024-06-10 10:20:16.151591] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.556 NewBaseBdev 00:16:10.815 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:10.815 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_name=NewBaseBdev 00:16:10.815 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:10.815 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local i 00:16:10.815 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:10.815 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:10.815 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.074 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:11.333 [ 00:16:11.333 { 00:16:11.333 "name": "NewBaseBdev", 00:16:11.333 "aliases": [ 00:16:11.333 "04636d6e-2713-11ef-b084-113036b5c18d" 00:16:11.333 ], 00:16:11.333 "product_name": "Malloc disk", 00:16:11.333 "block_size": 512, 00:16:11.333 "num_blocks": 65536, 00:16:11.333 "uuid": "04636d6e-2713-11ef-b084-113036b5c18d", 00:16:11.333 "assigned_rate_limits": { 00:16:11.333 
"rw_ios_per_sec": 0, 00:16:11.333 "rw_mbytes_per_sec": 0, 00:16:11.333 "r_mbytes_per_sec": 0, 00:16:11.333 "w_mbytes_per_sec": 0 00:16:11.333 }, 00:16:11.333 "claimed": true, 00:16:11.333 "claim_type": "exclusive_write", 00:16:11.333 "zoned": false, 00:16:11.333 "supported_io_types": { 00:16:11.333 "read": true, 00:16:11.333 "write": true, 00:16:11.333 "unmap": true, 00:16:11.333 "write_zeroes": true, 00:16:11.333 "flush": true, 00:16:11.333 "reset": true, 00:16:11.333 "compare": false, 00:16:11.333 "compare_and_write": false, 00:16:11.333 "abort": true, 00:16:11.333 "nvme_admin": false, 00:16:11.333 "nvme_io": false 00:16:11.333 }, 00:16:11.333 "memory_domains": [ 00:16:11.333 { 00:16:11.333 "dma_device_id": "system", 00:16:11.333 "dma_device_type": 1 00:16:11.333 }, 00:16:11.333 { 00:16:11.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.333 "dma_device_type": 2 00:16:11.333 } 00:16:11.333 ], 00:16:11.333 "driver_specific": {} 00:16:11.333 } 00:16:11.333 ] 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # return 0 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.333 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.669 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.669 "name": "Existed_Raid", 00:16:11.669 "uuid": "035a85de-2713-11ef-b084-113036b5c18d", 00:16:11.669 "strip_size_kb": 0, 00:16:11.669 "state": "online", 00:16:11.669 "raid_level": "raid1", 00:16:11.669 "superblock": true, 00:16:11.669 "num_base_bdevs": 4, 00:16:11.669 "num_base_bdevs_discovered": 4, 00:16:11.669 "num_base_bdevs_operational": 4, 00:16:11.669 "base_bdevs_list": [ 00:16:11.669 { 00:16:11.669 "name": "NewBaseBdev", 00:16:11.669 "uuid": "04636d6e-2713-11ef-b084-113036b5c18d", 00:16:11.669 "is_configured": true, 00:16:11.669 "data_offset": 2048, 00:16:11.669 "data_size": 63488 00:16:11.669 }, 00:16:11.669 { 00:16:11.669 "name": "BaseBdev2", 00:16:11.669 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:11.669 "is_configured": true, 00:16:11.669 "data_offset": 2048, 00:16:11.669 "data_size": 63488 00:16:11.669 }, 
00:16:11.669 { 00:16:11.669 "name": "BaseBdev3", 00:16:11.669 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:11.669 "is_configured": true, 00:16:11.669 "data_offset": 2048, 00:16:11.669 "data_size": 63488 00:16:11.669 }, 00:16:11.669 { 00:16:11.669 "name": "BaseBdev4", 00:16:11.669 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:11.669 "is_configured": true, 00:16:11.669 "data_offset": 2048, 00:16:11.669 "data_size": 63488 00:16:11.669 } 00:16:11.669 ] 00:16:11.669 }' 00:16:11.669 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.669 10:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.928 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:11.928 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:11.928 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:11.928 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:11.928 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:11.928 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:11.928 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:11.928 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:12.186 [2024-06-10 10:20:17.563426] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.186 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:12.186 "name": "Existed_Raid", 00:16:12.186 "aliases": [ 00:16:12.186 "035a85de-2713-11ef-b084-113036b5c18d" 00:16:12.186 ], 00:16:12.186 "product_name": "Raid Volume", 00:16:12.186 "block_size": 512, 00:16:12.186 "num_blocks": 63488, 00:16:12.186 "uuid": "035a85de-2713-11ef-b084-113036b5c18d", 00:16:12.186 "assigned_rate_limits": { 00:16:12.186 "rw_ios_per_sec": 0, 00:16:12.186 "rw_mbytes_per_sec": 0, 00:16:12.186 "r_mbytes_per_sec": 0, 00:16:12.186 "w_mbytes_per_sec": 0 00:16:12.186 }, 00:16:12.186 "claimed": false, 00:16:12.186 "zoned": false, 00:16:12.186 "supported_io_types": { 00:16:12.186 "read": true, 00:16:12.186 "write": true, 00:16:12.186 "unmap": false, 00:16:12.186 "write_zeroes": true, 00:16:12.186 "flush": false, 00:16:12.186 "reset": true, 00:16:12.186 "compare": false, 00:16:12.186 "compare_and_write": false, 00:16:12.186 "abort": false, 00:16:12.186 "nvme_admin": false, 00:16:12.186 "nvme_io": false 00:16:12.186 }, 00:16:12.186 "memory_domains": [ 00:16:12.186 { 00:16:12.186 "dma_device_id": "system", 00:16:12.186 "dma_device_type": 1 00:16:12.186 }, 00:16:12.186 { 00:16:12.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.186 "dma_device_type": 2 00:16:12.186 }, 00:16:12.186 { 00:16:12.186 "dma_device_id": "system", 00:16:12.186 "dma_device_type": 1 00:16:12.186 }, 00:16:12.186 { 00:16:12.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.186 "dma_device_type": 2 00:16:12.186 }, 00:16:12.186 { 00:16:12.186 "dma_device_id": "system", 00:16:12.186 "dma_device_type": 1 00:16:12.186 }, 00:16:12.186 { 00:16:12.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.186 "dma_device_type": 2 
00:16:12.186 }, 00:16:12.186 { 00:16:12.186 "dma_device_id": "system", 00:16:12.186 "dma_device_type": 1 00:16:12.186 }, 00:16:12.186 { 00:16:12.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.186 "dma_device_type": 2 00:16:12.186 } 00:16:12.186 ], 00:16:12.186 "driver_specific": { 00:16:12.186 "raid": { 00:16:12.186 "uuid": "035a85de-2713-11ef-b084-113036b5c18d", 00:16:12.186 "strip_size_kb": 0, 00:16:12.186 "state": "online", 00:16:12.186 "raid_level": "raid1", 00:16:12.186 "superblock": true, 00:16:12.186 "num_base_bdevs": 4, 00:16:12.186 "num_base_bdevs_discovered": 4, 00:16:12.186 "num_base_bdevs_operational": 4, 00:16:12.186 "base_bdevs_list": [ 00:16:12.186 { 00:16:12.186 "name": "NewBaseBdev", 00:16:12.186 "uuid": "04636d6e-2713-11ef-b084-113036b5c18d", 00:16:12.186 "is_configured": true, 00:16:12.186 "data_offset": 2048, 00:16:12.186 "data_size": 63488 00:16:12.186 }, 00:16:12.186 { 00:16:12.186 "name": "BaseBdev2", 00:16:12.186 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:12.186 "is_configured": true, 00:16:12.186 "data_offset": 2048, 00:16:12.186 "data_size": 63488 00:16:12.186 }, 00:16:12.186 { 00:16:12.186 "name": "BaseBdev3", 00:16:12.186 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:12.186 "is_configured": true, 00:16:12.186 "data_offset": 2048, 00:16:12.186 "data_size": 63488 00:16:12.186 }, 00:16:12.186 { 00:16:12.187 "name": "BaseBdev4", 00:16:12.187 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:12.187 "is_configured": true, 00:16:12.187 "data_offset": 2048, 00:16:12.187 "data_size": 63488 00:16:12.187 } 00:16:12.187 ] 00:16:12.187 } 00:16:12.187 } 00:16:12.187 }' 00:16:12.187 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:12.187 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:12.187 BaseBdev2 00:16:12.187 BaseBdev3 00:16:12.187 BaseBdev4' 00:16:12.187 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.187 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:12.187 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.447 "name": "NewBaseBdev", 00:16:12.447 "aliases": [ 00:16:12.447 "04636d6e-2713-11ef-b084-113036b5c18d" 00:16:12.447 ], 00:16:12.447 "product_name": "Malloc disk", 00:16:12.447 "block_size": 512, 00:16:12.447 "num_blocks": 65536, 00:16:12.447 "uuid": "04636d6e-2713-11ef-b084-113036b5c18d", 00:16:12.447 "assigned_rate_limits": { 00:16:12.447 "rw_ios_per_sec": 0, 00:16:12.447 "rw_mbytes_per_sec": 0, 00:16:12.447 "r_mbytes_per_sec": 0, 00:16:12.447 "w_mbytes_per_sec": 0 00:16:12.447 }, 00:16:12.447 "claimed": true, 00:16:12.447 "claim_type": "exclusive_write", 00:16:12.447 "zoned": false, 00:16:12.447 "supported_io_types": { 00:16:12.447 "read": true, 00:16:12.447 "write": true, 00:16:12.447 "unmap": true, 00:16:12.447 "write_zeroes": true, 00:16:12.447 "flush": true, 00:16:12.447 "reset": true, 00:16:12.447 "compare": false, 00:16:12.447 "compare_and_write": false, 00:16:12.447 "abort": true, 00:16:12.447 "nvme_admin": false, 00:16:12.447 "nvme_io": false 00:16:12.447 }, 00:16:12.447 "memory_domains": [ 
00:16:12.447 { 00:16:12.447 "dma_device_id": "system", 00:16:12.447 "dma_device_type": 1 00:16:12.447 }, 00:16:12.447 { 00:16:12.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.447 "dma_device_type": 2 00:16:12.447 } 00:16:12.447 ], 00:16:12.447 "driver_specific": {} 00:16:12.447 }' 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:12.447 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.707 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.707 "name": "BaseBdev2", 00:16:12.707 "aliases": [ 00:16:12.707 "02037dba-2713-11ef-b084-113036b5c18d" 00:16:12.707 ], 00:16:12.707 "product_name": "Malloc disk", 00:16:12.707 "block_size": 512, 00:16:12.708 "num_blocks": 65536, 00:16:12.708 "uuid": "02037dba-2713-11ef-b084-113036b5c18d", 00:16:12.708 "assigned_rate_limits": { 00:16:12.708 "rw_ios_per_sec": 0, 00:16:12.708 "rw_mbytes_per_sec": 0, 00:16:12.708 "r_mbytes_per_sec": 0, 00:16:12.708 "w_mbytes_per_sec": 0 00:16:12.708 }, 00:16:12.708 "claimed": true, 00:16:12.708 "claim_type": "exclusive_write", 00:16:12.708 "zoned": false, 00:16:12.708 "supported_io_types": { 00:16:12.708 "read": true, 00:16:12.708 "write": true, 00:16:12.708 "unmap": true, 00:16:12.708 "write_zeroes": true, 00:16:12.708 "flush": true, 00:16:12.708 "reset": true, 00:16:12.708 "compare": false, 00:16:12.708 "compare_and_write": false, 00:16:12.708 "abort": true, 00:16:12.708 "nvme_admin": false, 00:16:12.708 "nvme_io": false 00:16:12.708 }, 00:16:12.708 "memory_domains": [ 00:16:12.708 { 00:16:12.708 "dma_device_id": "system", 00:16:12.708 "dma_device_type": 1 00:16:12.708 }, 00:16:12.708 { 00:16:12.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.708 "dma_device_type": 2 00:16:12.708 } 00:16:12.708 ], 00:16:12.708 "driver_specific": {} 00:16:12.708 }' 00:16:12.708 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.708 10:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.708 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.708 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.708 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.968 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.968 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.968 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.968 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.968 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.968 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.968 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.968 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.968 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:12.968 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:13.227 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:13.227 "name": "BaseBdev3", 00:16:13.227 "aliases": [ 00:16:13.227 "02794c29-2713-11ef-b084-113036b5c18d" 00:16:13.227 ], 00:16:13.227 "product_name": "Malloc disk", 00:16:13.227 "block_size": 512, 00:16:13.227 "num_blocks": 65536, 00:16:13.227 "uuid": "02794c29-2713-11ef-b084-113036b5c18d", 00:16:13.227 "assigned_rate_limits": { 00:16:13.227 "rw_ios_per_sec": 0, 00:16:13.227 "rw_mbytes_per_sec": 0, 00:16:13.227 "r_mbytes_per_sec": 0, 00:16:13.227 "w_mbytes_per_sec": 0 00:16:13.227 }, 00:16:13.227 "claimed": true, 00:16:13.227 "claim_type": "exclusive_write", 00:16:13.227 "zoned": false, 00:16:13.227 "supported_io_types": { 00:16:13.227 "read": true, 00:16:13.227 "write": true, 00:16:13.227 "unmap": true, 00:16:13.228 "write_zeroes": true, 00:16:13.228 "flush": true, 00:16:13.228 "reset": true, 00:16:13.228 "compare": false, 00:16:13.228 "compare_and_write": false, 00:16:13.228 "abort": true, 00:16:13.228 "nvme_admin": false, 00:16:13.228 "nvme_io": false 00:16:13.228 }, 00:16:13.228 "memory_domains": [ 00:16:13.228 { 00:16:13.228 "dma_device_id": "system", 00:16:13.228 "dma_device_type": 1 00:16:13.228 }, 00:16:13.228 { 00:16:13.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.228 "dma_device_type": 2 00:16:13.228 } 00:16:13.228 ], 00:16:13.228 "driver_specific": {} 00:16:13.228 }' 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.228 10:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:13.228 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:13.488 "name": "BaseBdev4", 00:16:13.488 "aliases": [ 00:16:13.488 "02e99b26-2713-11ef-b084-113036b5c18d" 00:16:13.488 ], 00:16:13.488 "product_name": "Malloc disk", 00:16:13.488 "block_size": 512, 00:16:13.488 "num_blocks": 65536, 00:16:13.488 "uuid": "02e99b26-2713-11ef-b084-113036b5c18d", 00:16:13.488 "assigned_rate_limits": { 00:16:13.488 "rw_ios_per_sec": 0, 00:16:13.488 "rw_mbytes_per_sec": 0, 00:16:13.488 "r_mbytes_per_sec": 0, 00:16:13.488 "w_mbytes_per_sec": 0 00:16:13.488 }, 00:16:13.488 "claimed": true, 00:16:13.488 "claim_type": "exclusive_write", 00:16:13.488 "zoned": false, 00:16:13.488 "supported_io_types": { 00:16:13.488 "read": true, 00:16:13.488 "write": true, 00:16:13.488 "unmap": true, 00:16:13.488 "write_zeroes": true, 00:16:13.488 "flush": true, 00:16:13.488 "reset": true, 00:16:13.488 "compare": false, 00:16:13.488 "compare_and_write": false, 00:16:13.488 "abort": true, 00:16:13.488 "nvme_admin": false, 00:16:13.488 "nvme_io": false 00:16:13.488 }, 00:16:13.488 "memory_domains": [ 00:16:13.488 { 00:16:13.488 "dma_device_id": "system", 00:16:13.488 "dma_device_type": 1 00:16:13.488 }, 00:16:13.488 { 00:16:13.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.488 "dma_device_type": 2 00:16:13.488 } 00:16:13.488 ], 00:16:13.488 "driver_specific": {} 00:16:13.488 }' 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:13.488 10:20:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.488 10:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:13.488 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:13.747 [2024-06-10 10:20:19.195459] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.747 [2024-06-10 10:20:19.195482] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.747 [2024-06-10 10:20:19.195499] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.747 [2024-06-10 10:20:19.195592] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.747 [2024-06-10 10:20:19.195608] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e04bf00 name Existed_Raid, state offline 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 64608 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@949 -- # '[' -z 64608 ']' 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # kill -0 64608 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # uname 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # ps -c -o command 64608 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # tail -1 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:16:13.747 killing process with pid 64608 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # echo 'killing process with pid 64608' 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # kill 64608 00:16:13.747 [2024-06-10 10:20:19.226590] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.747 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # wait 64608 00:16:13.747 [2024-06-10 10:20:19.245472] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.006 10:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:14.006 ************************************ 00:16:14.006 END TEST raid_state_function_test_sb 00:16:14.006 ************************************ 00:16:14.006 00:16:14.006 real 0m26.236s 00:16:14.006 user 0m48.180s 00:16:14.006 sys 0m3.494s 00:16:14.006 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:14.006 10:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.006 10:20:19 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:14.006 10:20:19 bdev_raid -- common/autotest_common.sh@1100 -- # 
'[' 4 -le 1 ']' 00:16:14.006 10:20:19 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:14.006 10:20:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.006 ************************************ 00:16:14.006 START TEST raid_superblock_test 00:16:14.006 ************************************ 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 4 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=65422 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 65422 /var/tmp/spdk-raid.sock 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@830 -- # '[' -z 65422 ']' 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:14.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:14.006 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.006 [2024-06-10 10:20:19.469964] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:16:14.006 [2024-06-10 10:20:19.470133] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:14.573 EAL: TSC is not safe to use in SMP mode 00:16:14.573 EAL: TSC is not invariant 00:16:14.573 [2024-06-10 10:20:19.978905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.573 [2024-06-10 10:20:20.056844] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:14.573 [2024-06-10 10:20:20.058891] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.573 [2024-06-10 10:20:20.059578] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.573 [2024-06-10 10:20:20.059591] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@863 -- # return 0 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:15.139 malloc1 00:16:15.139 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:15.707 [2024-06-10 10:20:21.037935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:15.707 [2024-06-10 10:20:21.037989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.707 [2024-06-10 10:20:21.037999] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfb780 00:16:15.707 [2024-06-10 10:20:21.038006] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.707 [2024-06-10 10:20:21.038708] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.707 [2024-06-10 10:20:21.038737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:15.707 pt1 00:16:15.707 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:15.707 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:15.707 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:15.707 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 
00:16:15.707 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:15.707 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.707 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.707 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.707 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:15.707 malloc2 00:16:15.707 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.966 [2024-06-10 10:20:21.517957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.966 [2024-06-10 10:20:21.518004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.966 [2024-06-10 10:20:21.518013] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfbc80 00:16:15.966 [2024-06-10 10:20:21.518019] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.966 [2024-06-10 10:20:21.518485] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.966 [2024-06-10 10:20:21.518507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.966 pt2 00:16:15.966 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:15.966 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:15.966 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:15.966 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:16:15.966 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:15.966 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.966 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.966 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.966 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:16.224 malloc3 00:16:16.224 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:16.482 [2024-06-10 10:20:21.933979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:16.482 [2024-06-10 10:20:21.934025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.482 [2024-06-10 10:20:21.934051] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfc180 00:16:16.482 [2024-06-10 10:20:21.934058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.482 [2024-06-10 10:20:21.934506] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.482 [2024-06-10 10:20:21.934539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:16.482 pt3 00:16:16.482 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:16.482 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:16.482 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:16:16.482 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:16:16.482 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:16.482 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.482 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.482 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.482 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:16.739 malloc4 00:16:16.739 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:16.997 [2024-06-10 10:20:22.402016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:16.997 [2024-06-10 10:20:22.402068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.997 [2024-06-10 10:20:22.402078] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfc680 00:16:16.997 [2024-06-10 10:20:22.402086] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.997 [2024-06-10 10:20:22.402549] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.997 [2024-06-10 10:20:22.402576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:16.997 pt4 00:16:16.997 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:16.997 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:16.997 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:17.255 [2024-06-10 10:20:22.610021] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:17.255 [2024-06-10 10:20:22.610429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:17.255 [2024-06-10 10:20:22.610449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:17.255 [2024-06-10 10:20:22.610457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:17.255 [2024-06-10 10:20:22.610518] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bcfc900 00:16:17.255 [2024-06-10 10:20:22.610523] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:17.255 [2024-06-10 10:20:22.610551] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x82bd5ee20 00:16:17.255 [2024-06-10 10:20:22.610604] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bcfc900 00:16:17.255 [2024-06-10 10:20:22.610608] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bcfc900 00:16:17.255 [2024-06-10 10:20:22.610625] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.256 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.514 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.514 "name": "raid_bdev1", 00:16:17.514 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:17.514 "strip_size_kb": 0, 00:16:17.514 "state": "online", 00:16:17.514 "raid_level": "raid1", 00:16:17.514 "superblock": true, 00:16:17.514 "num_base_bdevs": 4, 00:16:17.514 "num_base_bdevs_discovered": 4, 00:16:17.514 "num_base_bdevs_operational": 4, 00:16:17.514 "base_bdevs_list": [ 00:16:17.514 { 00:16:17.514 "name": "pt1", 00:16:17.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.514 "is_configured": true, 00:16:17.514 "data_offset": 2048, 00:16:17.514 "data_size": 63488 00:16:17.514 }, 00:16:17.514 { 00:16:17.514 "name": "pt2", 00:16:17.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.514 "is_configured": true, 00:16:17.514 "data_offset": 2048, 00:16:17.514 "data_size": 63488 00:16:17.514 }, 00:16:17.514 { 00:16:17.514 "name": "pt3", 00:16:17.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.514 "is_configured": true, 00:16:17.514 "data_offset": 2048, 00:16:17.514 "data_size": 63488 00:16:17.514 }, 00:16:17.514 { 00:16:17.514 "name": "pt4", 00:16:17.514 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.514 "is_configured": true, 00:16:17.514 "data_offset": 2048, 00:16:17.514 "data_size": 63488 00:16:17.514 } 00:16:17.514 ] 00:16:17.514 }' 00:16:17.514 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.514 10:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.773 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:17.773 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:17.773 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:17.773 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:17.773 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:17.773 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:17.773 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:17.773 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:18.032 [2024-06-10 10:20:23.474096] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.032 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:18.032 "name": "raid_bdev1", 00:16:18.032 "aliases": [ 00:16:18.032 "0bb5cbbf-2713-11ef-b084-113036b5c18d" 00:16:18.032 ], 00:16:18.032 "product_name": "Raid Volume", 00:16:18.032 "block_size": 512, 00:16:18.032 "num_blocks": 63488, 00:16:18.032 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:18.032 "assigned_rate_limits": { 00:16:18.032 "rw_ios_per_sec": 0, 00:16:18.032 "rw_mbytes_per_sec": 0, 00:16:18.032 "r_mbytes_per_sec": 0, 00:16:18.032 "w_mbytes_per_sec": 0 00:16:18.032 }, 00:16:18.032 "claimed": false, 00:16:18.032 "zoned": false, 00:16:18.032 "supported_io_types": { 00:16:18.032 "read": true, 00:16:18.032 "write": true, 00:16:18.032 "unmap": false, 00:16:18.032 "write_zeroes": true, 00:16:18.032 "flush": false, 00:16:18.032 "reset": true, 00:16:18.032 "compare": false, 00:16:18.032 "compare_and_write": false, 00:16:18.032 "abort": false, 00:16:18.032 "nvme_admin": false, 00:16:18.032 "nvme_io": false 00:16:18.032 }, 00:16:18.032 "memory_domains": [ 00:16:18.032 { 00:16:18.032 "dma_device_id": "system", 00:16:18.032 "dma_device_type": 1 00:16:18.032 }, 00:16:18.032 { 00:16:18.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.032 "dma_device_type": 2 00:16:18.032 }, 00:16:18.032 { 00:16:18.032 "dma_device_id": "system", 00:16:18.032 "dma_device_type": 1 00:16:18.032 }, 00:16:18.032 { 00:16:18.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.032 "dma_device_type": 2 00:16:18.032 }, 00:16:18.032 { 00:16:18.032 "dma_device_id": "system", 00:16:18.032 "dma_device_type": 1 00:16:18.032 }, 00:16:18.032 { 00:16:18.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.032 "dma_device_type": 2 00:16:18.032 }, 00:16:18.032 { 00:16:18.032 "dma_device_id": "system", 00:16:18.032 "dma_device_type": 1 00:16:18.032 }, 00:16:18.032 { 00:16:18.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.032 "dma_device_type": 2 00:16:18.032 } 00:16:18.032 ], 00:16:18.032 "driver_specific": { 00:16:18.032 "raid": { 00:16:18.032 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:18.032 "strip_size_kb": 0, 00:16:18.032 "state": "online", 00:16:18.032 "raid_level": "raid1", 00:16:18.032 "superblock": true, 00:16:18.032 "num_base_bdevs": 4, 00:16:18.032 "num_base_bdevs_discovered": 4, 00:16:18.032 "num_base_bdevs_operational": 4, 00:16:18.032 "base_bdevs_list": [ 00:16:18.032 { 00:16:18.032 "name": "pt1", 00:16:18.032 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.032 "is_configured": true, 00:16:18.032 "data_offset": 2048, 00:16:18.032 "data_size": 63488 
00:16:18.032 }, 00:16:18.033 { 00:16:18.033 "name": "pt2", 00:16:18.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.033 "is_configured": true, 00:16:18.033 "data_offset": 2048, 00:16:18.033 "data_size": 63488 00:16:18.033 }, 00:16:18.033 { 00:16:18.033 "name": "pt3", 00:16:18.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.033 "is_configured": true, 00:16:18.033 "data_offset": 2048, 00:16:18.033 "data_size": 63488 00:16:18.033 }, 00:16:18.033 { 00:16:18.033 "name": "pt4", 00:16:18.033 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:18.033 "is_configured": true, 00:16:18.033 "data_offset": 2048, 00:16:18.033 "data_size": 63488 00:16:18.033 } 00:16:18.033 ] 00:16:18.033 } 00:16:18.033 } 00:16:18.033 }' 00:16:18.033 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:18.033 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:18.033 pt2 00:16:18.033 pt3 00:16:18.033 pt4' 00:16:18.033 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.033 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.033 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.291 "name": "pt1", 00:16:18.291 "aliases": [ 00:16:18.291 "00000000-0000-0000-0000-000000000001" 00:16:18.291 ], 00:16:18.291 "product_name": "passthru", 00:16:18.291 "block_size": 512, 00:16:18.291 "num_blocks": 65536, 00:16:18.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.291 "assigned_rate_limits": { 00:16:18.291 "rw_ios_per_sec": 0, 00:16:18.291 "rw_mbytes_per_sec": 0, 00:16:18.291 "r_mbytes_per_sec": 0, 00:16:18.291 "w_mbytes_per_sec": 0 00:16:18.291 }, 00:16:18.291 "claimed": true, 00:16:18.291 "claim_type": "exclusive_write", 00:16:18.291 "zoned": false, 00:16:18.291 "supported_io_types": { 00:16:18.291 "read": true, 00:16:18.291 "write": true, 00:16:18.291 "unmap": true, 00:16:18.291 "write_zeroes": true, 00:16:18.291 "flush": true, 00:16:18.291 "reset": true, 00:16:18.291 "compare": false, 00:16:18.291 "compare_and_write": false, 00:16:18.291 "abort": true, 00:16:18.291 "nvme_admin": false, 00:16:18.291 "nvme_io": false 00:16:18.291 }, 00:16:18.291 "memory_domains": [ 00:16:18.291 { 00:16:18.291 "dma_device_id": "system", 00:16:18.291 "dma_device_type": 1 00:16:18.291 }, 00:16:18.291 { 00:16:18.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.291 "dma_device_type": 2 00:16:18.291 } 00:16:18.291 ], 00:16:18.291 "driver_specific": { 00:16:18.291 "passthru": { 00:16:18.291 "name": "pt1", 00:16:18.291 "base_bdev_name": "malloc1" 00:16:18.291 } 00:16:18.291 } 00:16:18.291 }' 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
[[ null == null ]] 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:18.291 10:20:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.550 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.550 "name": "pt2", 00:16:18.550 "aliases": [ 00:16:18.550 "00000000-0000-0000-0000-000000000002" 00:16:18.550 ], 00:16:18.550 "product_name": "passthru", 00:16:18.550 "block_size": 512, 00:16:18.550 "num_blocks": 65536, 00:16:18.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.550 "assigned_rate_limits": { 00:16:18.550 "rw_ios_per_sec": 0, 00:16:18.550 "rw_mbytes_per_sec": 0, 00:16:18.550 "r_mbytes_per_sec": 0, 00:16:18.550 "w_mbytes_per_sec": 0 00:16:18.550 }, 00:16:18.550 "claimed": true, 00:16:18.550 "claim_type": "exclusive_write", 00:16:18.550 "zoned": false, 00:16:18.550 "supported_io_types": { 00:16:18.550 "read": true, 00:16:18.550 "write": true, 00:16:18.550 "unmap": true, 00:16:18.550 "write_zeroes": true, 00:16:18.550 "flush": true, 00:16:18.550 "reset": true, 00:16:18.550 "compare": false, 00:16:18.550 "compare_and_write": false, 00:16:18.550 "abort": true, 00:16:18.550 "nvme_admin": false, 00:16:18.550 "nvme_io": false 00:16:18.550 }, 00:16:18.550 "memory_domains": [ 00:16:18.550 { 00:16:18.550 "dma_device_id": "system", 00:16:18.550 "dma_device_type": 1 00:16:18.550 }, 00:16:18.550 { 00:16:18.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.550 "dma_device_type": 2 00:16:18.550 } 00:16:18.550 ], 00:16:18.550 "driver_specific": { 00:16:18.550 "passthru": { 00:16:18.550 "name": "pt2", 00:16:18.550 "base_bdev_name": "malloc2" 00:16:18.550 } 00:16:18.550 } 00:16:18.550 }' 00:16:18.550 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.550 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.808 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.808 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.808 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.808 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:18.809 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.809 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.809 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.809 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.809 10:20:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.809 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.809 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.809 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.809 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:19.067 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:19.067 "name": "pt3", 00:16:19.067 "aliases": [ 00:16:19.067 "00000000-0000-0000-0000-000000000003" 00:16:19.067 ], 00:16:19.067 "product_name": "passthru", 00:16:19.067 "block_size": 512, 00:16:19.067 "num_blocks": 65536, 00:16:19.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.067 "assigned_rate_limits": { 00:16:19.067 "rw_ios_per_sec": 0, 00:16:19.067 "rw_mbytes_per_sec": 0, 00:16:19.067 "r_mbytes_per_sec": 0, 00:16:19.067 "w_mbytes_per_sec": 0 00:16:19.067 }, 00:16:19.067 "claimed": true, 00:16:19.067 "claim_type": "exclusive_write", 00:16:19.067 "zoned": false, 00:16:19.067 "supported_io_types": { 00:16:19.067 "read": true, 00:16:19.067 "write": true, 00:16:19.067 "unmap": true, 00:16:19.067 "write_zeroes": true, 00:16:19.067 "flush": true, 00:16:19.067 "reset": true, 00:16:19.067 "compare": false, 00:16:19.067 "compare_and_write": false, 00:16:19.067 "abort": true, 00:16:19.067 "nvme_admin": false, 00:16:19.067 "nvme_io": false 00:16:19.067 }, 00:16:19.067 "memory_domains": [ 00:16:19.067 { 00:16:19.067 "dma_device_id": "system", 00:16:19.067 "dma_device_type": 1 00:16:19.067 }, 00:16:19.067 { 00:16:19.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.067 "dma_device_type": 2 00:16:19.067 } 00:16:19.067 ], 00:16:19.067 "driver_specific": { 00:16:19.067 "passthru": { 00:16:19.067 "name": "pt3", 00:16:19.067 "base_bdev_name": "malloc3" 00:16:19.067 } 00:16:19.067 } 00:16:19.067 }' 00:16:19.067 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.067 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.067 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:19.067 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:19.068 10:20:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:19.326 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:19.326 "name": "pt4", 00:16:19.326 "aliases": [ 00:16:19.326 "00000000-0000-0000-0000-000000000004" 00:16:19.326 ], 00:16:19.327 "product_name": "passthru", 00:16:19.327 "block_size": 512, 00:16:19.327 "num_blocks": 65536, 00:16:19.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.327 "assigned_rate_limits": { 00:16:19.327 "rw_ios_per_sec": 0, 00:16:19.327 "rw_mbytes_per_sec": 0, 00:16:19.327 "r_mbytes_per_sec": 0, 00:16:19.327 "w_mbytes_per_sec": 0 00:16:19.327 }, 00:16:19.327 "claimed": true, 00:16:19.327 "claim_type": "exclusive_write", 00:16:19.327 "zoned": false, 00:16:19.327 "supported_io_types": { 00:16:19.327 "read": true, 00:16:19.327 "write": true, 00:16:19.327 "unmap": true, 00:16:19.327 "write_zeroes": true, 00:16:19.327 "flush": true, 00:16:19.327 "reset": true, 00:16:19.327 "compare": false, 00:16:19.327 "compare_and_write": false, 00:16:19.327 "abort": true, 00:16:19.327 "nvme_admin": false, 00:16:19.327 "nvme_io": false 00:16:19.327 }, 00:16:19.327 "memory_domains": [ 00:16:19.327 { 00:16:19.327 "dma_device_id": "system", 00:16:19.327 "dma_device_type": 1 00:16:19.327 }, 00:16:19.327 { 00:16:19.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.327 "dma_device_type": 2 00:16:19.327 } 00:16:19.327 ], 00:16:19.327 "driver_specific": { 00:16:19.327 "passthru": { 00:16:19.327 "name": "pt4", 00:16:19.327 "base_bdev_name": "malloc4" 00:16:19.327 } 00:16:19.327 } 00:16:19.327 }' 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:19.327 10:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:19.586 [2024-06-10 10:20:25.158153] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.586 10:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0bb5cbbf-2713-11ef-b084-113036b5c18d 00:16:19.586 10:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0bb5cbbf-2713-11ef-b084-113036b5c18d ']' 00:16:19.586 10:20:25 
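The pass traced above is the verify_raid_bdev_properties helper (bdev_raid.sh lines 194-208 in this run): it dumps raid_bdev1 once, extracts the configured base bdev names from .driver_specific.raid.base_bdevs_list with jq, then queries each pt bdev and checks block_size, md_size, md_interleave and dif_type. The paired jq calls per field suggest each base bdev value is compared against the raid bdev's; a minimal sketch under that assumption (and assuming the surrounding script runs with set -e, so a failed [[ ]] aborts the test):

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_bdev_info=$($rpc bdev_get_bdevs -b raid_bdev1 | jq '.[]')
    base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                             | select(.is_configured == true).name' <<< "$raid_bdev_info")
    for name in $base_bdev_names; do
        base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        for field in block_size md_size md_interleave dif_type; do
            # assumed check: the base bdev property must match the raid bdev's value
            [[ "$(jq ".$field" <<< "$base_bdev_info")" == "$(jq ".$field" <<< "$raid_bdev_info")" ]]
        done
    done

Here all four pt bdevs report block_size 512 and null metadata fields, matching the raid volume, so every comparison in the trace passes.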
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:19.844 [2024-06-10 10:20:25.406135] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.844 [2024-06-10 10:20:25.406152] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.844 [2024-06-10 10:20:25.406172] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.844 [2024-06-10 10:20:25.406188] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.844 [2024-06-10 10:20:25.406192] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bcfc900 name raid_bdev1, state offline 00:16:19.844 10:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.844 10:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:20.103 10:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:20.103 10:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:20.103 10:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.103 10:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:20.370 10:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.370 10:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:20.629 10:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.629 10:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:20.887 10:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.887 10:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:21.145 10:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:21.145 10:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@649 -- # local es=0 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:21.404 10:20:26 
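The teardown just traced (bdev_raid.sh lines 440-450) removes the raid bdev first, checks that bdev_raid_get_bdevs reports nothing, then deletes the four passthru bdevs and checks that no bdev with product_name "passthru" is left; only after that does the NOT-wrapped bdev_raid_create run against the bare malloc bdevs. A condensed sketch using only the RPCs and jq filters visible in the trace (the simple loop stands in for the script's base_bdevs_pt array iteration):

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_delete raid_bdev1
    [[ -z "$($rpc bdev_raid_get_bdevs all | jq -r '.[]')" ]]   # no raid bdev left
    for pt in pt1 pt2 pt3 pt4; do
        $rpc bdev_passthru_delete "$pt"
    done
    # no passthru bdev may remain in the global bdev list
    [[ "$($rpc bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any')" == false ]]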
bdev_raid.raid_superblock_test -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:21.404 10:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:21.663 [2024-06-10 10:20:27.106211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:21.663 [2024-06-10 10:20:27.106671] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:21.663 [2024-06-10 10:20:27.106689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:21.663 [2024-06-10 10:20:27.106696] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:21.663 [2024-06-10 10:20:27.106708] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:21.663 [2024-06-10 10:20:27.106743] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:21.663 [2024-06-10 10:20:27.106753] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:21.663 [2024-06-10 10:20:27.106760] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:21.663 [2024-06-10 10:20:27.106768] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.663 [2024-06-10 10:20:27.106772] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bcfc680 name raid_bdev1, state configuring 00:16:21.663 request: 00:16:21.663 { 00:16:21.663 "name": "raid_bdev1", 00:16:21.663 "raid_level": "raid1", 00:16:21.663 "base_bdevs": [ 00:16:21.663 "malloc1", 00:16:21.663 "malloc2", 00:16:21.663 "malloc3", 00:16:21.663 "malloc4" 00:16:21.663 ], 00:16:21.663 "superblock": false, 00:16:21.663 "method": "bdev_raid_create", 00:16:21.663 "req_id": 1 00:16:21.663 } 00:16:21.663 Got JSON-RPC error response 00:16:21.663 response: 00:16:21.663 { 00:16:21.663 "code": -17, 00:16:21.663 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:21.663 } 00:16:21.663 10:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # es=1 00:16:21.663 10:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:21.663 10:20:27 
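The wrapped bdev_raid_create fails as intended: each malloc bdev still carries the superblock written for the earlier raid_bdev1, so the newly configuring raid rejects them ("Superblock of a different raid bdev found on bdev malloc1", and likewise for malloc2-4) and the RPC returns JSON-RPC error -17, "File exists"; the NOT helper then turns the non-zero exit status back into success (es=1). Reproduced by hand the negative check would look roughly like this, with the command taken verbatim from the trace:

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # expected to fail: the malloc bdevs already hold a raid superblock
    if $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi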
bdev_raid.raid_superblock_test -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:21.663 10:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:21.663 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:21.663 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.922 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:21.922 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:21.922 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.922 [2024-06-10 10:20:27.526231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.922 [2024-06-10 10:20:27.526275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.922 [2024-06-10 10:20:27.526284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfc180 00:16:21.922 [2024-06-10 10:20:27.526290] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.922 [2024-06-10 10:20:27.526823] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.922 [2024-06-10 10:20:27.526846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.922 [2024-06-10 10:20:27.526864] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:21.922 [2024-06-10 10:20:27.526873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:22.181 pt1 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.181 "name": "raid_bdev1", 00:16:22.181 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:22.181 "strip_size_kb": 0, 00:16:22.181 "state": 
"configuring", 00:16:22.181 "raid_level": "raid1", 00:16:22.181 "superblock": true, 00:16:22.181 "num_base_bdevs": 4, 00:16:22.181 "num_base_bdevs_discovered": 1, 00:16:22.181 "num_base_bdevs_operational": 4, 00:16:22.181 "base_bdevs_list": [ 00:16:22.181 { 00:16:22.181 "name": "pt1", 00:16:22.181 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.181 "is_configured": true, 00:16:22.181 "data_offset": 2048, 00:16:22.181 "data_size": 63488 00:16:22.181 }, 00:16:22.181 { 00:16:22.181 "name": null, 00:16:22.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.181 "is_configured": false, 00:16:22.181 "data_offset": 2048, 00:16:22.181 "data_size": 63488 00:16:22.181 }, 00:16:22.181 { 00:16:22.181 "name": null, 00:16:22.181 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.181 "is_configured": false, 00:16:22.181 "data_offset": 2048, 00:16:22.181 "data_size": 63488 00:16:22.181 }, 00:16:22.181 { 00:16:22.181 "name": null, 00:16:22.181 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.181 "is_configured": false, 00:16:22.181 "data_offset": 2048, 00:16:22.181 "data_size": 63488 00:16:22.181 } 00:16:22.181 ] 00:16:22.181 }' 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.181 10:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.746 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:16:22.746 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.746 [2024-06-10 10:20:28.310276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.746 [2024-06-10 10:20:28.310366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.746 [2024-06-10 10:20:28.310378] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfb780 00:16:22.746 [2024-06-10 10:20:28.310385] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.746 [2024-06-10 10:20:28.310491] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.746 [2024-06-10 10:20:28.310502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.746 [2024-06-10 10:20:28.310524] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:22.746 [2024-06-10 10:20:28.310532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.746 pt2 00:16:22.746 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:23.003 [2024-06-10 10:20:28.506300] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:23.003 
10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.003 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.379 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.379 "name": "raid_bdev1", 00:16:23.379 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:23.379 "strip_size_kb": 0, 00:16:23.379 "state": "configuring", 00:16:23.379 "raid_level": "raid1", 00:16:23.379 "superblock": true, 00:16:23.379 "num_base_bdevs": 4, 00:16:23.379 "num_base_bdevs_discovered": 1, 00:16:23.379 "num_base_bdevs_operational": 4, 00:16:23.379 "base_bdevs_list": [ 00:16:23.379 { 00:16:23.379 "name": "pt1", 00:16:23.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.379 "is_configured": true, 00:16:23.379 "data_offset": 2048, 00:16:23.379 "data_size": 63488 00:16:23.379 }, 00:16:23.379 { 00:16:23.379 "name": null, 00:16:23.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.379 "is_configured": false, 00:16:23.379 "data_offset": 2048, 00:16:23.379 "data_size": 63488 00:16:23.379 }, 00:16:23.379 { 00:16:23.379 "name": null, 00:16:23.379 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:23.379 "is_configured": false, 00:16:23.379 "data_offset": 2048, 00:16:23.379 "data_size": 63488 00:16:23.379 }, 00:16:23.379 { 00:16:23.379 "name": null, 00:16:23.379 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:23.379 "is_configured": false, 00:16:23.379 "data_offset": 2048, 00:16:23.379 "data_size": 63488 00:16:23.379 } 00:16:23.379 ] 00:16:23.379 }' 00:16:23.379 10:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.379 10:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.638 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:23.638 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:23.638 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.896 [2024-06-10 10:20:29.334333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.897 [2024-06-10 10:20:29.334389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.897 [2024-06-10 10:20:29.334399] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfb780 00:16:23.897 [2024-06-10 10:20:29.334406] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.897 [2024-06-10 10:20:29.334518] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.897 [2024-06-10 10:20:29.334527] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.897 [2024-06-10 10:20:29.334546] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:23.897 [2024-06-10 10:20:29.334554] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.897 pt2 00:16:23.897 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:23.897 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:23.897 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:24.154 [2024-06-10 10:20:29.634363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:24.154 [2024-06-10 10:20:29.634422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.154 [2024-06-10 10:20:29.634433] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfcb80 00:16:24.154 [2024-06-10 10:20:29.634441] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.154 [2024-06-10 10:20:29.634540] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.154 [2024-06-10 10:20:29.634549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:24.154 [2024-06-10 10:20:29.634571] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:24.154 [2024-06-10 10:20:29.634579] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:24.154 pt3 00:16:24.154 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:24.154 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:24.154 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:24.414 [2024-06-10 10:20:29.922392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:24.414 [2024-06-10 10:20:29.922454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.414 [2024-06-10 10:20:29.922466] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfc900 00:16:24.414 [2024-06-10 10:20:29.922474] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.414 [2024-06-10 10:20:29.922571] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.414 [2024-06-10 10:20:29.922579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:24.414 [2024-06-10 10:20:29.922599] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:24.414 [2024-06-10 10:20:29.922607] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:24.414 [2024-06-10 10:20:29.922635] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bcfbc80 00:16:24.414 [2024-06-10 10:20:29.922639] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:24.414 [2024-06-10 10:20:29.922658] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd5ee20 00:16:24.414 [2024-06-10 
10:20:29.922701] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bcfbc80 00:16:24.414 [2024-06-10 10:20:29.922705] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bcfbc80 00:16:24.414 [2024-06-10 10:20:29.922722] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.414 pt4 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.414 10:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.674 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.674 "name": "raid_bdev1", 00:16:24.674 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:24.674 "strip_size_kb": 0, 00:16:24.674 "state": "online", 00:16:24.674 "raid_level": "raid1", 00:16:24.674 "superblock": true, 00:16:24.674 "num_base_bdevs": 4, 00:16:24.674 "num_base_bdevs_discovered": 4, 00:16:24.674 "num_base_bdevs_operational": 4, 00:16:24.674 "base_bdevs_list": [ 00:16:24.674 { 00:16:24.674 "name": "pt1", 00:16:24.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:24.674 "is_configured": true, 00:16:24.674 "data_offset": 2048, 00:16:24.674 "data_size": 63488 00:16:24.674 }, 00:16:24.674 { 00:16:24.674 "name": "pt2", 00:16:24.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.674 "is_configured": true, 00:16:24.674 "data_offset": 2048, 00:16:24.674 "data_size": 63488 00:16:24.674 }, 00:16:24.674 { 00:16:24.674 "name": "pt3", 00:16:24.674 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.674 "is_configured": true, 00:16:24.674 "data_offset": 2048, 00:16:24.674 "data_size": 63488 00:16:24.674 }, 00:16:24.674 { 00:16:24.674 "name": "pt4", 00:16:24.674 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.674 "is_configured": true, 00:16:24.674 "data_offset": 2048, 00:16:24.674 "data_size": 63488 00:16:24.674 } 00:16:24.674 ] 00:16:24.674 }' 00:16:24.674 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.674 10:20:30 
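With pt2, pt3 and pt4 re-created in the loop at bdev_raid.sh lines 477-478, the last claimed base bdev completes the set and the raid assembles itself from the on-disk superblocks: the log shows the io device being registered, "blockcnt 63488, blocklen 512", and "raid bdev is created with name raid_bdev1", after which the state check expects online with all four base bdevs operational. No bdev_raid_create call is involved. A condensed sketch of that re-creation loop (the index-to-name and UUID mapping is inferred from the trace):

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 2 3 4; do                        # pt1 was already re-created earlier
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
             -u "00000000-0000-0000-0000-00000000000$i"
    done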
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:25.241 [2024-06-10 10:20:30.786450] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:25.241 "name": "raid_bdev1", 00:16:25.241 "aliases": [ 00:16:25.241 "0bb5cbbf-2713-11ef-b084-113036b5c18d" 00:16:25.241 ], 00:16:25.241 "product_name": "Raid Volume", 00:16:25.241 "block_size": 512, 00:16:25.241 "num_blocks": 63488, 00:16:25.241 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:25.241 "assigned_rate_limits": { 00:16:25.241 "rw_ios_per_sec": 0, 00:16:25.241 "rw_mbytes_per_sec": 0, 00:16:25.241 "r_mbytes_per_sec": 0, 00:16:25.241 "w_mbytes_per_sec": 0 00:16:25.241 }, 00:16:25.241 "claimed": false, 00:16:25.241 "zoned": false, 00:16:25.241 "supported_io_types": { 00:16:25.241 "read": true, 00:16:25.241 "write": true, 00:16:25.241 "unmap": false, 00:16:25.241 "write_zeroes": true, 00:16:25.241 "flush": false, 00:16:25.241 "reset": true, 00:16:25.241 "compare": false, 00:16:25.241 "compare_and_write": false, 00:16:25.241 "abort": false, 00:16:25.241 "nvme_admin": false, 00:16:25.241 "nvme_io": false 00:16:25.241 }, 00:16:25.241 "memory_domains": [ 00:16:25.241 { 00:16:25.241 "dma_device_id": "system", 00:16:25.241 "dma_device_type": 1 00:16:25.241 }, 00:16:25.241 { 00:16:25.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.241 "dma_device_type": 2 00:16:25.241 }, 00:16:25.241 { 00:16:25.241 "dma_device_id": "system", 00:16:25.241 "dma_device_type": 1 00:16:25.241 }, 00:16:25.241 { 00:16:25.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.241 "dma_device_type": 2 00:16:25.241 }, 00:16:25.241 { 00:16:25.241 "dma_device_id": "system", 00:16:25.241 "dma_device_type": 1 00:16:25.241 }, 00:16:25.241 { 00:16:25.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.241 "dma_device_type": 2 00:16:25.241 }, 00:16:25.241 { 00:16:25.241 "dma_device_id": "system", 00:16:25.241 "dma_device_type": 1 00:16:25.241 }, 00:16:25.241 { 00:16:25.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.241 "dma_device_type": 2 00:16:25.241 } 00:16:25.241 ], 00:16:25.241 "driver_specific": { 00:16:25.241 "raid": { 00:16:25.241 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:25.241 "strip_size_kb": 0, 00:16:25.241 "state": "online", 00:16:25.241 "raid_level": "raid1", 00:16:25.241 "superblock": true, 00:16:25.241 "num_base_bdevs": 4, 00:16:25.241 "num_base_bdevs_discovered": 4, 00:16:25.241 "num_base_bdevs_operational": 4, 00:16:25.241 "base_bdevs_list": [ 00:16:25.241 { 
00:16:25.241 "name": "pt1", 00:16:25.241 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.241 "is_configured": true, 00:16:25.241 "data_offset": 2048, 00:16:25.241 "data_size": 63488 00:16:25.241 }, 00:16:25.241 { 00:16:25.241 "name": "pt2", 00:16:25.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.241 "is_configured": true, 00:16:25.241 "data_offset": 2048, 00:16:25.241 "data_size": 63488 00:16:25.241 }, 00:16:25.241 { 00:16:25.241 "name": "pt3", 00:16:25.241 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:25.241 "is_configured": true, 00:16:25.241 "data_offset": 2048, 00:16:25.241 "data_size": 63488 00:16:25.241 }, 00:16:25.241 { 00:16:25.241 "name": "pt4", 00:16:25.241 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:25.241 "is_configured": true, 00:16:25.241 "data_offset": 2048, 00:16:25.241 "data_size": 63488 00:16:25.241 } 00:16:25.241 ] 00:16:25.241 } 00:16:25.241 } 00:16:25.241 }' 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:25.241 pt2 00:16:25.241 pt3 00:16:25.241 pt4' 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:25.241 10:20:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.499 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:25.499 "name": "pt1", 00:16:25.499 "aliases": [ 00:16:25.499 "00000000-0000-0000-0000-000000000001" 00:16:25.499 ], 00:16:25.499 "product_name": "passthru", 00:16:25.499 "block_size": 512, 00:16:25.499 "num_blocks": 65536, 00:16:25.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.499 "assigned_rate_limits": { 00:16:25.499 "rw_ios_per_sec": 0, 00:16:25.499 "rw_mbytes_per_sec": 0, 00:16:25.499 "r_mbytes_per_sec": 0, 00:16:25.499 "w_mbytes_per_sec": 0 00:16:25.499 }, 00:16:25.499 "claimed": true, 00:16:25.499 "claim_type": "exclusive_write", 00:16:25.499 "zoned": false, 00:16:25.499 "supported_io_types": { 00:16:25.499 "read": true, 00:16:25.499 "write": true, 00:16:25.499 "unmap": true, 00:16:25.499 "write_zeroes": true, 00:16:25.499 "flush": true, 00:16:25.499 "reset": true, 00:16:25.499 "compare": false, 00:16:25.499 "compare_and_write": false, 00:16:25.499 "abort": true, 00:16:25.499 "nvme_admin": false, 00:16:25.499 "nvme_io": false 00:16:25.499 }, 00:16:25.499 "memory_domains": [ 00:16:25.499 { 00:16:25.499 "dma_device_id": "system", 00:16:25.499 "dma_device_type": 1 00:16:25.499 }, 00:16:25.499 { 00:16:25.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.499 "dma_device_type": 2 00:16:25.499 } 00:16:25.499 ], 00:16:25.499 "driver_specific": { 00:16:25.499 "passthru": { 00:16:25.499 "name": "pt1", 00:16:25.499 "base_bdev_name": "malloc1" 00:16:25.499 } 00:16:25.499 } 00:16:25.499 }' 00:16:25.499 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:25.767 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:26.063 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.063 "name": "pt2", 00:16:26.063 "aliases": [ 00:16:26.063 "00000000-0000-0000-0000-000000000002" 00:16:26.063 ], 00:16:26.063 "product_name": "passthru", 00:16:26.063 "block_size": 512, 00:16:26.063 "num_blocks": 65536, 00:16:26.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.063 "assigned_rate_limits": { 00:16:26.063 "rw_ios_per_sec": 0, 00:16:26.063 "rw_mbytes_per_sec": 0, 00:16:26.063 "r_mbytes_per_sec": 0, 00:16:26.063 "w_mbytes_per_sec": 0 00:16:26.063 }, 00:16:26.063 "claimed": true, 00:16:26.063 "claim_type": "exclusive_write", 00:16:26.063 "zoned": false, 00:16:26.063 "supported_io_types": { 00:16:26.063 "read": true, 00:16:26.064 "write": true, 00:16:26.064 "unmap": true, 00:16:26.064 "write_zeroes": true, 00:16:26.064 "flush": true, 00:16:26.064 "reset": true, 00:16:26.064 "compare": false, 00:16:26.064 "compare_and_write": false, 00:16:26.064 "abort": true, 00:16:26.064 "nvme_admin": false, 00:16:26.064 "nvme_io": false 00:16:26.064 }, 00:16:26.064 "memory_domains": [ 00:16:26.064 { 00:16:26.064 "dma_device_id": "system", 00:16:26.064 "dma_device_type": 1 00:16:26.064 }, 00:16:26.064 { 00:16:26.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.064 "dma_device_type": 2 00:16:26.064 } 00:16:26.064 ], 00:16:26.064 "driver_specific": { 00:16:26.064 "passthru": { 00:16:26.064 "name": "pt2", 00:16:26.064 "base_bdev_name": "malloc2" 00:16:26.064 } 00:16:26.064 } 00:16:26.064 }' 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.064 10:20:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:26.064 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:26.322 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.322 "name": "pt3", 00:16:26.322 "aliases": [ 00:16:26.322 "00000000-0000-0000-0000-000000000003" 00:16:26.322 ], 00:16:26.322 "product_name": "passthru", 00:16:26.322 "block_size": 512, 00:16:26.322 "num_blocks": 65536, 00:16:26.322 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.322 "assigned_rate_limits": { 00:16:26.322 "rw_ios_per_sec": 0, 00:16:26.322 "rw_mbytes_per_sec": 0, 00:16:26.322 "r_mbytes_per_sec": 0, 00:16:26.323 "w_mbytes_per_sec": 0 00:16:26.323 }, 00:16:26.323 "claimed": true, 00:16:26.323 "claim_type": "exclusive_write", 00:16:26.323 "zoned": false, 00:16:26.323 "supported_io_types": { 00:16:26.323 "read": true, 00:16:26.323 "write": true, 00:16:26.323 "unmap": true, 00:16:26.323 "write_zeroes": true, 00:16:26.323 "flush": true, 00:16:26.323 "reset": true, 00:16:26.323 "compare": false, 00:16:26.323 "compare_and_write": false, 00:16:26.323 "abort": true, 00:16:26.323 "nvme_admin": false, 00:16:26.323 "nvme_io": false 00:16:26.323 }, 00:16:26.323 "memory_domains": [ 00:16:26.323 { 00:16:26.323 "dma_device_id": "system", 00:16:26.323 "dma_device_type": 1 00:16:26.323 }, 00:16:26.323 { 00:16:26.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.323 "dma_device_type": 2 00:16:26.323 } 00:16:26.323 ], 00:16:26.323 "driver_specific": { 00:16:26.323 "passthru": { 00:16:26.323 "name": "pt3", 00:16:26.323 "base_bdev_name": "malloc3" 00:16:26.323 } 00:16:26.323 } 00:16:26.323 }' 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 
-- # for name in $base_bdev_names 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:26.323 10:20:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:26.582 "name": "pt4", 00:16:26.582 "aliases": [ 00:16:26.582 "00000000-0000-0000-0000-000000000004" 00:16:26.582 ], 00:16:26.582 "product_name": "passthru", 00:16:26.582 "block_size": 512, 00:16:26.582 "num_blocks": 65536, 00:16:26.582 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.582 "assigned_rate_limits": { 00:16:26.582 "rw_ios_per_sec": 0, 00:16:26.582 "rw_mbytes_per_sec": 0, 00:16:26.582 "r_mbytes_per_sec": 0, 00:16:26.582 "w_mbytes_per_sec": 0 00:16:26.582 }, 00:16:26.582 "claimed": true, 00:16:26.582 "claim_type": "exclusive_write", 00:16:26.582 "zoned": false, 00:16:26.582 "supported_io_types": { 00:16:26.582 "read": true, 00:16:26.582 "write": true, 00:16:26.582 "unmap": true, 00:16:26.582 "write_zeroes": true, 00:16:26.582 "flush": true, 00:16:26.582 "reset": true, 00:16:26.582 "compare": false, 00:16:26.582 "compare_and_write": false, 00:16:26.582 "abort": true, 00:16:26.582 "nvme_admin": false, 00:16:26.582 "nvme_io": false 00:16:26.582 }, 00:16:26.582 "memory_domains": [ 00:16:26.582 { 00:16:26.582 "dma_device_id": "system", 00:16:26.582 "dma_device_type": 1 00:16:26.582 }, 00:16:26.582 { 00:16:26.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.582 "dma_device_type": 2 00:16:26.582 } 00:16:26.582 ], 00:16:26.582 "driver_specific": { 00:16:26.582 "passthru": { 00:16:26.582 "name": "pt4", 00:16:26.582 "base_bdev_name": "malloc4" 00:16:26.582 } 00:16:26.582 } 00:16:26.582 }' 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:26.582 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:26.840 [2024-06-10 10:20:32.434550] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 
0bb5cbbf-2713-11ef-b084-113036b5c18d '!=' 0bb5cbbf-2713-11ef-b084-113036b5c18d ']' 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:27.098 [2024-06-10 10:20:32.666524] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.098 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.356 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.356 "name": "raid_bdev1", 00:16:27.356 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:27.356 "strip_size_kb": 0, 00:16:27.356 "state": "online", 00:16:27.356 "raid_level": "raid1", 00:16:27.356 "superblock": true, 00:16:27.356 "num_base_bdevs": 4, 00:16:27.356 "num_base_bdevs_discovered": 3, 00:16:27.356 "num_base_bdevs_operational": 3, 00:16:27.356 "base_bdevs_list": [ 00:16:27.356 { 00:16:27.356 "name": null, 00:16:27.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.356 "is_configured": false, 00:16:27.356 "data_offset": 2048, 00:16:27.356 "data_size": 63488 00:16:27.356 }, 00:16:27.356 { 00:16:27.356 "name": "pt2", 00:16:27.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.356 "is_configured": true, 00:16:27.356 "data_offset": 2048, 00:16:27.356 "data_size": 63488 00:16:27.356 }, 00:16:27.356 { 00:16:27.356 "name": "pt3", 00:16:27.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.356 "is_configured": true, 00:16:27.356 "data_offset": 2048, 00:16:27.356 "data_size": 63488 00:16:27.356 }, 00:16:27.356 { 00:16:27.356 "name": "pt4", 00:16:27.356 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:27.356 "is_configured": true, 00:16:27.356 "data_offset": 2048, 00:16:27.356 "data_size": 63488 00:16:27.356 } 00:16:27.356 ] 00:16:27.356 }' 00:16:27.356 10:20:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
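Above, pt1 is removed from the running array (bdev_raid.sh line 492); because raid1 provides redundancy, has_redundancy returns 0 at line 490 and the test expects the volume to survive the removal. The state dump just traced indeed shows raid_bdev1 still "online" with three discovered and three operational base bdevs. The script's own assertions sit behind xtrace_disable; one way to express the same check with the RPCs and jq filters from the trace:

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_delete pt1
    $rpc bdev_raid_get_bdevs all | jq -e '.[] | select(.name == "raid_bdev1")
        | .state == "online" and .num_base_bdevs_discovered == 3'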
-- # xtrace_disable 00:16:27.356 10:20:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.923 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:27.923 [2024-06-10 10:20:33.434549] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.923 [2024-06-10 10:20:33.434577] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.923 [2024-06-10 10:20:33.434598] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.923 [2024-06-10 10:20:33.434615] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.923 [2024-06-10 10:20:33.434619] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bcfbc80 name raid_bdev1, state offline 00:16:27.923 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.923 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:28.180 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:28.180 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:28.180 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:28.180 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:28.180 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:28.438 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:28.438 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:28.438 10:20:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:28.696 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:28.696 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:28.696 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:28.954 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:28.954 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:28.954 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:28.954 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:28.954 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:29.213 [2024-06-10 10:20:34.718587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:29.213 [2024-06-10 10:20:34.718643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.213 [2024-06-10 10:20:34.718653] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfc900 00:16:29.213 [2024-06-10 10:20:34.718660] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.213 [2024-06-10 10:20:34.719181] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.213 [2024-06-10 10:20:34.719205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:29.213 [2024-06-10 10:20:34.719227] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:29.213 [2024-06-10 10:20:34.719238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.213 pt2 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.213 10:20:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.472 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.472 "name": "raid_bdev1", 00:16:29.472 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:29.472 "strip_size_kb": 0, 00:16:29.472 "state": "configuring", 00:16:29.472 "raid_level": "raid1", 00:16:29.472 "superblock": true, 00:16:29.472 "num_base_bdevs": 4, 00:16:29.472 "num_base_bdevs_discovered": 1, 00:16:29.472 "num_base_bdevs_operational": 3, 00:16:29.472 "base_bdevs_list": [ 00:16:29.472 { 00:16:29.472 "name": null, 00:16:29.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.472 "is_configured": false, 00:16:29.472 "data_offset": 2048, 00:16:29.472 "data_size": 63488 00:16:29.472 }, 00:16:29.472 { 00:16:29.472 "name": "pt2", 00:16:29.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.472 "is_configured": true, 00:16:29.472 "data_offset": 2048, 00:16:29.472 "data_size": 63488 00:16:29.472 }, 00:16:29.472 { 00:16:29.472 "name": null, 00:16:29.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.472 "is_configured": false, 00:16:29.472 "data_offset": 2048, 00:16:29.472 "data_size": 63488 00:16:29.472 }, 00:16:29.472 { 00:16:29.472 "name": null, 00:16:29.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.472 "is_configured": false, 00:16:29.472 "data_offset": 2048, 00:16:29.472 "data_size": 63488 00:16:29.472 } 00:16:29.472 ] 00:16:29.472 }' 00:16:29.472 10:20:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.472 10:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.731 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:16:29.731 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:29.731 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:29.989 [2024-06-10 10:20:35.486645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:29.989 [2024-06-10 10:20:35.486715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.989 [2024-06-10 10:20:35.486725] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfc680 00:16:29.989 [2024-06-10 10:20:35.486732] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.989 [2024-06-10 10:20:35.486810] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.989 [2024-06-10 10:20:35.486818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:29.989 [2024-06-10 10:20:35.486834] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:29.989 [2024-06-10 10:20:35.486858] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:29.989 pt3 00:16:29.989 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:29.989 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:29.989 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:29.989 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:29.989 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:29.990 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:29.990 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.990 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.990 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.990 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.990 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.990 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.247 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.247 "name": "raid_bdev1", 00:16:30.247 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:30.247 "strip_size_kb": 0, 00:16:30.247 "state": "configuring", 00:16:30.247 "raid_level": "raid1", 00:16:30.247 "superblock": true, 00:16:30.247 "num_base_bdevs": 4, 00:16:30.247 "num_base_bdevs_discovered": 2, 00:16:30.247 "num_base_bdevs_operational": 3, 00:16:30.247 "base_bdevs_list": [ 00:16:30.247 { 00:16:30.247 "name": 
null, 00:16:30.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.247 "is_configured": false, 00:16:30.247 "data_offset": 2048, 00:16:30.247 "data_size": 63488 00:16:30.247 }, 00:16:30.247 { 00:16:30.247 "name": "pt2", 00:16:30.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.247 "is_configured": true, 00:16:30.247 "data_offset": 2048, 00:16:30.247 "data_size": 63488 00:16:30.247 }, 00:16:30.247 { 00:16:30.247 "name": "pt3", 00:16:30.247 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.247 "is_configured": true, 00:16:30.247 "data_offset": 2048, 00:16:30.247 "data_size": 63488 00:16:30.247 }, 00:16:30.247 { 00:16:30.247 "name": null, 00:16:30.247 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.247 "is_configured": false, 00:16:30.247 "data_offset": 2048, 00:16:30.247 "data_size": 63488 00:16:30.247 } 00:16:30.247 ] 00:16:30.247 }' 00:16:30.247 10:20:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.247 10:20:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:30.812 [2024-06-10 10:20:36.370680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:30.812 [2024-06-10 10:20:36.370742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.812 [2024-06-10 10:20:36.370752] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfbc80 00:16:30.812 [2024-06-10 10:20:36.370758] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.812 [2024-06-10 10:20:36.370831] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.812 [2024-06-10 10:20:36.370839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:30.812 [2024-06-10 10:20:36.370854] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:30.812 [2024-06-10 10:20:36.370861] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:30.812 [2024-06-10 10:20:36.370882] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bcfb780 00:16:30.812 [2024-06-10 10:20:36.370885] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:30.812 [2024-06-10 10:20:36.370902] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd5ee20 00:16:30.812 [2024-06-10 10:20:36.370933] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bcfb780 00:16:30.812 [2024-06-10 10:20:36.370936] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bcfb780 00:16:30.812 [2024-06-10 10:20:36.370952] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.812 pt4 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.812 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.070 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.070 "name": "raid_bdev1", 00:16:31.070 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:31.070 "strip_size_kb": 0, 00:16:31.070 "state": "online", 00:16:31.070 "raid_level": "raid1", 00:16:31.070 "superblock": true, 00:16:31.070 "num_base_bdevs": 4, 00:16:31.070 "num_base_bdevs_discovered": 3, 00:16:31.070 "num_base_bdevs_operational": 3, 00:16:31.070 "base_bdevs_list": [ 00:16:31.070 { 00:16:31.070 "name": null, 00:16:31.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.070 "is_configured": false, 00:16:31.070 "data_offset": 2048, 00:16:31.070 "data_size": 63488 00:16:31.070 }, 00:16:31.070 { 00:16:31.070 "name": "pt2", 00:16:31.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.070 "is_configured": true, 00:16:31.070 "data_offset": 2048, 00:16:31.070 "data_size": 63488 00:16:31.070 }, 00:16:31.070 { 00:16:31.070 "name": "pt3", 00:16:31.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.070 "is_configured": true, 00:16:31.070 "data_offset": 2048, 00:16:31.070 "data_size": 63488 00:16:31.070 }, 00:16:31.070 { 00:16:31.070 "name": "pt4", 00:16:31.070 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.070 "is_configured": true, 00:16:31.070 "data_offset": 2048, 00:16:31.070 "data_size": 63488 00:16:31.070 } 00:16:31.070 ] 00:16:31.070 }' 00:16:31.070 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.070 10:20:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.634 10:20:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:31.893 [2024-06-10 10:20:37.270712] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.893 [2024-06-10 10:20:37.270735] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.893 [2024-06-10 10:20:37.270751] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.893 [2024-06-10 10:20:37.270764] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.893 
[2024-06-10 10:20:37.270767] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bcfb780 name raid_bdev1, state offline 00:16:31.893 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.893 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:32.151 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:32.151 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:32.151 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:16:32.151 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:16:32.151 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.409 [2024-06-10 10:20:37.974737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.409 [2024-06-10 10:20:37.974781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.409 [2024-06-10 10:20:37.974790] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfbc80 00:16:32.409 [2024-06-10 10:20:37.974796] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.409 [2024-06-10 10:20:37.975297] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.409 [2024-06-10 10:20:37.975324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.409 [2024-06-10 10:20:37.975342] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:32.409 [2024-06-10 10:20:37.975351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.409 [2024-06-10 10:20:37.975387] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:32.409 [2024-06-10 10:20:37.975391] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.409 [2024-06-10 10:20:37.975396] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bcfb780 name raid_bdev1, state configuring 00:16:32.409 [2024-06-10 10:20:37.975402] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.409 [2024-06-10 10:20:37.975416] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:32.409 pt1 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.409 10:20:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.668 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.668 "name": "raid_bdev1", 00:16:32.668 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:32.668 "strip_size_kb": 0, 00:16:32.668 "state": "configuring", 00:16:32.668 "raid_level": "raid1", 00:16:32.668 "superblock": true, 00:16:32.668 "num_base_bdevs": 4, 00:16:32.668 "num_base_bdevs_discovered": 2, 00:16:32.668 "num_base_bdevs_operational": 3, 00:16:32.668 "base_bdevs_list": [ 00:16:32.668 { 00:16:32.668 "name": null, 00:16:32.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.668 "is_configured": false, 00:16:32.668 "data_offset": 2048, 00:16:32.668 "data_size": 63488 00:16:32.668 }, 00:16:32.668 { 00:16:32.668 "name": "pt2", 00:16:32.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.668 "is_configured": true, 00:16:32.668 "data_offset": 2048, 00:16:32.668 "data_size": 63488 00:16:32.668 }, 00:16:32.668 { 00:16:32.668 "name": "pt3", 00:16:32.668 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.668 "is_configured": true, 00:16:32.668 "data_offset": 2048, 00:16:32.668 "data_size": 63488 00:16:32.668 }, 00:16:32.668 { 00:16:32.668 "name": null, 00:16:32.668 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.668 "is_configured": false, 00:16:32.668 "data_offset": 2048, 00:16:32.668 "data_size": 63488 00:16:32.668 } 00:16:32.668 ] 00:16:32.668 }' 00:16:32.668 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.668 10:20:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.234 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:16:33.234 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:33.234 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:16:33.234 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:33.527 [2024-06-10 10:20:38.930769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:33.527 [2024-06-10 10:20:38.930830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.527 [2024-06-10 10:20:38.930839] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bcfc180 00:16:33.527 [2024-06-10 10:20:38.930845] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.527 [2024-06-10 10:20:38.930917] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.527 [2024-06-10 10:20:38.930939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:33.527 [2024-06-10 10:20:38.930954] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:33.527 [2024-06-10 10:20:38.930959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:33.527 [2024-06-10 10:20:38.930979] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bcfb780 00:16:33.527 [2024-06-10 10:20:38.930983] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:33.527 [2024-06-10 10:20:38.931000] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bd5ee20 00:16:33.527 [2024-06-10 10:20:38.931032] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bcfb780 00:16:33.527 [2024-06-10 10:20:38.931035] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bcfb780 00:16:33.527 [2024-06-10 10:20:38.931050] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.527 pt4 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.527 10:20:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.785 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.785 "name": "raid_bdev1", 00:16:33.785 "uuid": "0bb5cbbf-2713-11ef-b084-113036b5c18d", 00:16:33.785 "strip_size_kb": 0, 00:16:33.785 "state": "online", 00:16:33.785 "raid_level": "raid1", 00:16:33.785 "superblock": true, 00:16:33.785 "num_base_bdevs": 4, 00:16:33.785 "num_base_bdevs_discovered": 3, 00:16:33.785 "num_base_bdevs_operational": 3, 00:16:33.785 "base_bdevs_list": [ 00:16:33.785 { 00:16:33.785 "name": null, 00:16:33.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.785 "is_configured": false, 00:16:33.785 "data_offset": 2048, 00:16:33.785 "data_size": 63488 00:16:33.785 }, 00:16:33.785 { 00:16:33.785 "name": "pt2", 00:16:33.785 "uuid": "00000000-0000-0000-0000-000000000002", 
00:16:33.785 "is_configured": true, 00:16:33.785 "data_offset": 2048, 00:16:33.785 "data_size": 63488 00:16:33.785 }, 00:16:33.785 { 00:16:33.785 "name": "pt3", 00:16:33.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.785 "is_configured": true, 00:16:33.785 "data_offset": 2048, 00:16:33.785 "data_size": 63488 00:16:33.785 }, 00:16:33.785 { 00:16:33.785 "name": "pt4", 00:16:33.785 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:33.785 "is_configured": true, 00:16:33.785 "data_offset": 2048, 00:16:33.785 "data_size": 63488 00:16:33.785 } 00:16:33.785 ] 00:16:33.785 }' 00:16:33.785 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.785 10:20:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.043 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:34.043 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:34.300 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:34.300 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:34.300 10:20:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:34.558 [2024-06-10 10:20:39.994853] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 0bb5cbbf-2713-11ef-b084-113036b5c18d '!=' 0bb5cbbf-2713-11ef-b084-113036b5c18d ']' 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 65422 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@949 -- # '[' -z 65422 ']' 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # kill -0 65422 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # uname 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # tail -1 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # ps -c -o command 65422 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:16:34.558 killing process with pid 65422 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 65422' 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # kill 65422 00:16:34.558 [2024-06-10 10:20:40.024604] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.558 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # wait 65422 00:16:34.558 [2024-06-10 10:20:40.024635] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.558 [2024-06-10 10:20:40.024653] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:34.558 [2024-06-10 10:20:40.024658] 
bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bcfb780 name raid_bdev1, state offline 00:16:34.558 [2024-06-10 10:20:40.043850] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.817 10:20:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:34.817 00:16:34.817 real 0m20.754s 00:16:34.817 user 0m37.887s 00:16:34.817 sys 0m2.830s 00:16:34.817 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:34.817 10:20:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.817 ************************************ 00:16:34.817 END TEST raid_superblock_test 00:16:34.817 ************************************ 00:16:34.817 10:20:40 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:16:34.817 10:20:40 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:16:34.817 10:20:40 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:34.817 10:20:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.817 ************************************ 00:16:34.817 START TEST raid_read_error_test 00:16:34.817 ************************************ 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 4 read 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local 
strip_size 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.XAru5WnO 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=66058 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 66058 /var/tmp/spdk-raid.sock 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@830 -- # '[' -z 66058 ']' 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:34.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:34.817 10:20:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.817 [2024-06-10 10:20:40.273776] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:16:34.817 [2024-06-10 10:20:40.273937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:35.382 EAL: TSC is not safe to use in SMP mode 00:16:35.382 EAL: TSC is not invariant 00:16:35.382 [2024-06-10 10:20:40.726100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.382 [2024-06-10 10:20:40.820534] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
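The trace that follows assembles the bdevperf test stack one leg at a time: a malloc bdev, an error bdev wrapped around it (exposed as EE_<malloc name>), and a passthru bdev claimed on top, repeated for BaseBdev1 through BaseBdev4, before the raid1 array is created with an on-disk superblock and a read failure is injected on the first leg. A condensed, hedged sketch of that RPC sequence is given here; the build_error_leg helper and the $rpc shorthand are illustrative only, while the individual rpc.py calls are the ones visible in the trace.

# Sketch only: build_error_leg and $rpc are not part of the test script itself.
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
build_error_leg() {
    local name=$1
    $rpc bdev_malloc_create 32 512 -b "${name}_malloc"            # backing malloc bdev
    $rpc bdev_error_create "${name}_malloc"                       # exposes EE_${name}_malloc
    $rpc bdev_passthru_create -b "EE_${name}_malloc" -p "$name"   # claimed as $name
}
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    build_error_leg "$b"
done
# raid1 with a superblock (-s), as in the bdev_raid_create call in the trace below
$rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
# a read failure on one leg is expected to leave the raid1 array online with all four legs
$rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure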
00:16:35.382 [2024-06-10 10:20:40.823181] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.382 [2024-06-10 10:20:40.824066] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.382 [2024-06-10 10:20:40.824080] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.947 10:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:35.947 10:20:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@863 -- # return 0 00:16:35.947 10:20:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:35.947 10:20:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:35.947 BaseBdev1_malloc 00:16:35.947 10:20:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:36.205 true 00:16:36.205 10:20:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:36.462 [2024-06-10 10:20:41.964125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:36.462 [2024-06-10 10:20:41.964194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.463 [2024-06-10 10:20:41.964222] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1a5780 00:16:36.463 [2024-06-10 10:20:41.964230] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.463 [2024-06-10 10:20:41.964749] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.463 [2024-06-10 10:20:41.964781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:36.463 BaseBdev1 00:16:36.463 10:20:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:36.463 10:20:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:36.724 BaseBdev2_malloc 00:16:36.724 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:36.982 true 00:16:36.982 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:37.240 [2024-06-10 10:20:42.728171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:37.240 [2024-06-10 10:20:42.728252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.240 [2024-06-10 10:20:42.728299] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1a5c80 00:16:37.240 [2024-06-10 10:20:42.728322] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.240 [2024-06-10 10:20:42.728863] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.240 [2024-06-10 10:20:42.728892] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:16:37.240 BaseBdev2 00:16:37.240 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:37.240 10:20:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:37.497 BaseBdev3_malloc 00:16:37.497 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:38.063 true 00:16:38.063 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:38.321 [2024-06-10 10:20:43.680209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:38.321 [2024-06-10 10:20:43.680280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.321 [2024-06-10 10:20:43.680309] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1a6180 00:16:38.321 [2024-06-10 10:20:43.680318] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.321 [2024-06-10 10:20:43.680862] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.321 [2024-06-10 10:20:43.680893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:38.321 BaseBdev3 00:16:38.321 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:38.321 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:38.579 BaseBdev4_malloc 00:16:38.579 10:20:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:38.837 true 00:16:38.837 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:38.837 [2024-06-10 10:20:44.444221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:38.837 [2024-06-10 10:20:44.444284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.837 [2024-06-10 10:20:44.444312] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d1a6680 00:16:38.837 [2024-06-10 10:20:44.444320] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.095 [2024-06-10 10:20:44.444889] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.095 [2024-06-10 10:20:44.444929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:39.095 BaseBdev4 00:16:39.095 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:39.095 [2024-06-10 10:20:44.668260] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.095 [2024-06-10 10:20:44.668752] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.095 [2024-06-10 10:20:44.668773] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:39.095 [2024-06-10 10:20:44.668785] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:39.095 [2024-06-10 10:20:44.668849] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d1a6900 00:16:39.095 [2024-06-10 10:20:44.668854] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:39.095 [2024-06-10 10:20:44.668889] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d211e20 00:16:39.095 [2024-06-10 10:20:44.668953] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d1a6900 00:16:39.095 [2024-06-10 10:20:44.668957] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d1a6900 00:16:39.095 [2024-06-10 10:20:44.668982] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.095 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.096 10:20:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.662 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.662 "name": "raid_bdev1", 00:16:39.662 "uuid": "18db9d75-2713-11ef-b084-113036b5c18d", 00:16:39.662 "strip_size_kb": 0, 00:16:39.662 "state": "online", 00:16:39.662 "raid_level": "raid1", 00:16:39.662 "superblock": true, 00:16:39.662 "num_base_bdevs": 4, 00:16:39.662 "num_base_bdevs_discovered": 4, 00:16:39.662 "num_base_bdevs_operational": 4, 00:16:39.662 "base_bdevs_list": [ 00:16:39.662 { 00:16:39.662 "name": "BaseBdev1", 00:16:39.662 "uuid": "1e4e7fd0-5944-9c5d-b471-961bbc6e9b79", 00:16:39.662 "is_configured": true, 00:16:39.662 "data_offset": 2048, 00:16:39.662 "data_size": 63488 00:16:39.662 }, 00:16:39.662 { 00:16:39.662 "name": "BaseBdev2", 00:16:39.662 "uuid": "5d362ced-9d90-2258-85cf-a443a01a3c50", 00:16:39.662 "is_configured": true, 00:16:39.662 "data_offset": 2048, 00:16:39.662 "data_size": 63488 00:16:39.662 }, 00:16:39.662 { 00:16:39.662 "name": "BaseBdev3", 00:16:39.662 "uuid": 
"7e8eec10-a8d6-6753-8700-2ce640af3777", 00:16:39.662 "is_configured": true, 00:16:39.662 "data_offset": 2048, 00:16:39.662 "data_size": 63488 00:16:39.662 }, 00:16:39.662 { 00:16:39.662 "name": "BaseBdev4", 00:16:39.662 "uuid": "2a433473-bc1e-b55a-9dc2-970e10b4efdb", 00:16:39.662 "is_configured": true, 00:16:39.662 "data_offset": 2048, 00:16:39.663 "data_size": 63488 00:16:39.663 } 00:16:39.663 ] 00:16:39.663 }' 00:16:39.663 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.663 10:20:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.921 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:39.921 10:20:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:39.921 [2024-06-10 10:20:45.524335] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d211ec0 00:16:40.858 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.117 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.376 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.376 "name": "raid_bdev1", 00:16:41.376 "uuid": "18db9d75-2713-11ef-b084-113036b5c18d", 00:16:41.376 "strip_size_kb": 0, 00:16:41.376 "state": "online", 00:16:41.376 "raid_level": "raid1", 00:16:41.376 "superblock": true, 00:16:41.376 "num_base_bdevs": 4, 00:16:41.376 "num_base_bdevs_discovered": 4, 00:16:41.376 "num_base_bdevs_operational": 4, 00:16:41.376 
"base_bdevs_list": [ 00:16:41.376 { 00:16:41.376 "name": "BaseBdev1", 00:16:41.376 "uuid": "1e4e7fd0-5944-9c5d-b471-961bbc6e9b79", 00:16:41.376 "is_configured": true, 00:16:41.376 "data_offset": 2048, 00:16:41.376 "data_size": 63488 00:16:41.376 }, 00:16:41.376 { 00:16:41.376 "name": "BaseBdev2", 00:16:41.376 "uuid": "5d362ced-9d90-2258-85cf-a443a01a3c50", 00:16:41.376 "is_configured": true, 00:16:41.376 "data_offset": 2048, 00:16:41.376 "data_size": 63488 00:16:41.376 }, 00:16:41.376 { 00:16:41.376 "name": "BaseBdev3", 00:16:41.376 "uuid": "7e8eec10-a8d6-6753-8700-2ce640af3777", 00:16:41.376 "is_configured": true, 00:16:41.376 "data_offset": 2048, 00:16:41.376 "data_size": 63488 00:16:41.376 }, 00:16:41.376 { 00:16:41.376 "name": "BaseBdev4", 00:16:41.376 "uuid": "2a433473-bc1e-b55a-9dc2-970e10b4efdb", 00:16:41.376 "is_configured": true, 00:16:41.376 "data_offset": 2048, 00:16:41.376 "data_size": 63488 00:16:41.376 } 00:16:41.376 ] 00:16:41.376 }' 00:16:41.376 10:20:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.376 10:20:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:41.943 [2024-06-10 10:20:47.501829] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.943 [2024-06-10 10:20:47.501857] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.943 [2024-06-10 10:20:47.502174] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.943 [2024-06-10 10:20:47.502193] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.943 [2024-06-10 10:20:47.502215] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.943 [2024-06-10 10:20:47.502220] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d1a6900 name raid_bdev1, state offline 00:16:41.943 0 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 66058 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@949 -- # '[' -z 66058 ']' 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # kill -0 66058 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # uname 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 66058 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # tail -1 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:16:41.943 killing process with pid 66058 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66058' 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # kill 66058 00:16:41.943 [2024-06-10 10:20:47.527042] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.943 10:20:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@973 -- # wait 66058 00:16:41.943 [2024-06-10 10:20:47.546240] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.203 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.XAru5WnO 00:16:42.203 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:42.203 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:42.203 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:42.203 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:42.203 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:42.203 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:42.203 10:20:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:42.203 00:16:42.203 real 0m7.467s 00:16:42.203 user 0m12.242s 00:16:42.203 sys 0m1.053s 00:16:42.203 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:42.203 ************************************ 00:16:42.203 10:20:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.203 END TEST raid_read_error_test 00:16:42.203 ************************************ 00:16:42.203 10:20:47 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:16:42.203 10:20:47 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:16:42.203 10:20:47 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:42.203 10:20:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:42.203 ************************************ 00:16:42.203 START TEST raid_write_error_test 00:16:42.203 ************************************ 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # raid_io_error_test raid1 4 write 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.A3ltBqj9 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=66196 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 66196 /var/tmp/spdk-raid.sock 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@830 -- # '[' -z 66196 ']' 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:42.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:42.203 10:20:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.203 [2024-06-10 10:20:47.790850] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:16:42.203 [2024-06-10 10:20:47.791099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:42.768 EAL: TSC is not safe to use in SMP mode 00:16:42.768 EAL: TSC is not invariant 00:16:42.768 [2024-06-10 10:20:48.334545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.025 [2024-06-10 10:20:48.418555] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
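From here the write variant rebuilds the same malloc/error/passthru stack and raid1 array as the read test above; the functional difference is that the injected error is a write failure. In the read trace, the [[ raid1 = raid1 ]] && [[ read = write ]] branch fell through and all four base bdevs were expected to stay operational; for write that branch is taken, which implies the failing leg is expected to drop out of the array. A minimal hedged sketch of the post-injection check follows; the expected count of three operational base bdevs is inferred from that branch, not from output shown in this excerpt.

rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Inject a write failure on one leg while bdevperf I/O is running (sketch).
$rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure
# Re-read the raid bdev and check it stays online with the failed leg removed.
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
jq -r '.state' <<< "$info"                       # expected: online
jq -r '.num_base_bdevs_operational' <<< "$info"  # expected: 3 (inferred, not shown in this excerpt)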
00:16:43.025 [2024-06-10 10:20:48.420852] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.025 [2024-06-10 10:20:48.422748] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.025 [2024-06-10 10:20:48.422760] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.283 10:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:43.283 10:20:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@863 -- # return 0 00:16:43.283 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:43.283 10:20:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:43.541 BaseBdev1_malloc 00:16:43.541 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:43.799 true 00:16:43.799 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:44.057 [2024-06-10 10:20:49.621570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:44.057 [2024-06-10 10:20:49.621632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.057 [2024-06-10 10:20:49.621657] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829d14780 00:16:44.057 [2024-06-10 10:20:49.621664] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.057 [2024-06-10 10:20:49.622206] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.057 [2024-06-10 10:20:49.622235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:44.057 BaseBdev1 00:16:44.057 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:44.057 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:44.315 BaseBdev2_malloc 00:16:44.316 10:20:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:44.574 true 00:16:44.574 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:44.832 [2024-06-10 10:20:50.409598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:44.832 [2024-06-10 10:20:50.409659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.832 [2024-06-10 10:20:50.409686] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829d14c80 00:16:44.832 [2024-06-10 10:20:50.409694] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.832 [2024-06-10 10:20:50.410262] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.832 [2024-06-10 10:20:50.410291] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:44.832 BaseBdev2 00:16:45.090 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:45.090 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:45.090 BaseBdev3_malloc 00:16:45.090 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:45.349 true 00:16:45.349 10:20:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:45.608 [2024-06-10 10:20:51.161604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:45.608 [2024-06-10 10:20:51.161656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.608 [2024-06-10 10:20:51.161679] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829d15180 00:16:45.608 [2024-06-10 10:20:51.161687] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.608 [2024-06-10 10:20:51.162154] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.608 [2024-06-10 10:20:51.162183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:45.608 BaseBdev3 00:16:45.608 10:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:45.608 10:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:45.934 BaseBdev4_malloc 00:16:45.934 10:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:46.193 true 00:16:46.193 10:20:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:46.760 [2024-06-10 10:20:52.125667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:46.760 [2024-06-10 10:20:52.125725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.760 [2024-06-10 10:20:52.125751] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829d15680 00:16:46.760 [2024-06-10 10:20:52.125759] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.760 [2024-06-10 10:20:52.126304] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.760 [2024-06-10 10:20:52.126338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:46.760 BaseBdev4 00:16:46.760 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:47.018 [2024-06-10 10:20:52.397682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
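Each of the four base devices being claimed here is a three-layer stack, built over the RPC socket so that faults can be injected later: a malloc disk for backing storage, an error bdev wrapped around it (the device that bdev_error_inject_error targets further down), and a passthru bdev that exposes the BaseBdevN name the raid consumes. A minimal sketch of the per-device sequence and the final raid1 assembly, using the same RPC calls and the EE_ naming convention visible in the trace (the loop is a condensation, not the literal test code):

    RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc      # 32 MiB backing store, 512-byte blocks
        $RPC bdev_error_create BaseBdev${i}_malloc                 # error bdev EE_BaseBdev${i}_malloc for fault injection
        $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # raid1 across the four passthru devices, with an on-disk superblock (-s)
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

The DEBUG lines that follow show the raid module claiming each base bdev in turn and registering raid_bdev1 with 63488 data blocks per member, which is the 65536-block malloc disk minus the 2048-block region set aside at the start of each member (data_offset 2048 in the JSON dump below).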
00:16:47.018 [2024-06-10 10:20:52.398140] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.018 [2024-06-10 10:20:52.398165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:47.018 [2024-06-10 10:20:52.398178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:47.018 [2024-06-10 10:20:52.398237] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x829d15900 00:16:47.018 [2024-06-10 10:20:52.398242] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:47.018 [2024-06-10 10:20:52.398275] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829d80e20 00:16:47.018 [2024-06-10 10:20:52.398339] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829d15900 00:16:47.018 [2024-06-10 10:20:52.398342] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829d15900 00:16:47.018 [2024-06-10 10:20:52.398365] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.018 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.277 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.277 "name": "raid_bdev1", 00:16:47.277 "uuid": "1d770816-2713-11ef-b084-113036b5c18d", 00:16:47.277 "strip_size_kb": 0, 00:16:47.277 "state": "online", 00:16:47.277 "raid_level": "raid1", 00:16:47.277 "superblock": true, 00:16:47.277 "num_base_bdevs": 4, 00:16:47.277 "num_base_bdevs_discovered": 4, 00:16:47.277 "num_base_bdevs_operational": 4, 00:16:47.277 "base_bdevs_list": [ 00:16:47.277 { 00:16:47.277 "name": "BaseBdev1", 00:16:47.277 "uuid": "ac94897b-2d34-c556-a546-e183595a494f", 00:16:47.277 "is_configured": true, 00:16:47.277 "data_offset": 2048, 00:16:47.277 "data_size": 63488 00:16:47.277 }, 00:16:47.277 { 00:16:47.277 "name": "BaseBdev2", 00:16:47.277 "uuid": "a6e841a3-9c18-3f5b-a28e-3e6a88b0b7ef", 00:16:47.277 "is_configured": true, 00:16:47.277 "data_offset": 2048, 00:16:47.277 "data_size": 63488 00:16:47.277 }, 00:16:47.277 { 00:16:47.277 "name": 
"BaseBdev3", 00:16:47.277 "uuid": "a3df30ac-6e5f-2659-8550-059fda70acd5", 00:16:47.277 "is_configured": true, 00:16:47.277 "data_offset": 2048, 00:16:47.277 "data_size": 63488 00:16:47.277 }, 00:16:47.277 { 00:16:47.277 "name": "BaseBdev4", 00:16:47.277 "uuid": "c059a9c7-5fa0-de5b-a2d7-f09b63771104", 00:16:47.277 "is_configured": true, 00:16:47.277 "data_offset": 2048, 00:16:47.277 "data_size": 63488 00:16:47.277 } 00:16:47.277 ] 00:16:47.277 }' 00:16:47.277 10:20:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.277 10:20:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.535 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:47.535 10:20:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:47.793 [2024-06-10 10:20:53.201790] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829d80ec0 00:16:48.726 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:48.985 [2024-06-10 10:20:54.488741] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:48.985 [2024-06-10 10:20:54.488801] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:48.985 [2024-06-10 10:20:54.488931] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x829d80ec0 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.985 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.243 
10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.243 "name": "raid_bdev1", 00:16:49.243 "uuid": "1d770816-2713-11ef-b084-113036b5c18d", 00:16:49.243 "strip_size_kb": 0, 00:16:49.243 "state": "online", 00:16:49.243 "raid_level": "raid1", 00:16:49.243 "superblock": true, 00:16:49.243 "num_base_bdevs": 4, 00:16:49.243 "num_base_bdevs_discovered": 3, 00:16:49.243 "num_base_bdevs_operational": 3, 00:16:49.243 "base_bdevs_list": [ 00:16:49.243 { 00:16:49.243 "name": null, 00:16:49.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.243 "is_configured": false, 00:16:49.243 "data_offset": 2048, 00:16:49.243 "data_size": 63488 00:16:49.243 }, 00:16:49.243 { 00:16:49.243 "name": "BaseBdev2", 00:16:49.243 "uuid": "a6e841a3-9c18-3f5b-a28e-3e6a88b0b7ef", 00:16:49.243 "is_configured": true, 00:16:49.243 "data_offset": 2048, 00:16:49.243 "data_size": 63488 00:16:49.243 }, 00:16:49.243 { 00:16:49.243 "name": "BaseBdev3", 00:16:49.243 "uuid": "a3df30ac-6e5f-2659-8550-059fda70acd5", 00:16:49.243 "is_configured": true, 00:16:49.243 "data_offset": 2048, 00:16:49.243 "data_size": 63488 00:16:49.243 }, 00:16:49.243 { 00:16:49.243 "name": "BaseBdev4", 00:16:49.243 "uuid": "c059a9c7-5fa0-de5b-a2d7-f09b63771104", 00:16:49.243 "is_configured": true, 00:16:49.243 "data_offset": 2048, 00:16:49.243 "data_size": 63488 00:16:49.243 } 00:16:49.243 ] 00:16:49.243 }' 00:16:49.243 10:20:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.243 10:20:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.501 10:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:49.758 [2024-06-10 10:20:55.362362] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.758 [2024-06-10 10:20:55.362395] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.759 [2024-06-10 10:20:55.362784] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.759 [2024-06-10 10:20:55.362795] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.759 [2024-06-10 10:20:55.362814] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.759 [2024-06-10 10:20:55.362819] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829d15900 name raid_bdev1, state offline 00:16:50.017 0 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 66196 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@949 -- # '[' -z 66196 ']' 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # kill -0 66196 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # uname 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # ps -c -o command 66196 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # tail -1 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' 
bdevperf = sudo ']' 00:16:50.017 killing process with pid 66196 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66196' 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # kill 66196 00:16:50.017 [2024-06-10 10:20:55.393259] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # wait 66196 00:16:50.017 [2024-06-10 10:20:55.412911] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.A3ltBqj9 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:50.017 00:16:50.017 real 0m7.823s 00:16:50.017 user 0m12.778s 00:16:50.017 sys 0m1.187s 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:50.017 10:20:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.017 ************************************ 00:16:50.017 END TEST raid_write_error_test 00:16:50.017 ************************************ 00:16:50.275 10:20:55 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' '' = true ']' 00:16:50.275 10:20:55 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:16:50.275 10:20:55 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:16:50.275 10:20:55 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:50.275 10:20:55 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:16:50.275 10:20:55 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:50.275 10:20:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.275 ************************************ 00:16:50.275 START TEST raid_state_function_test_sb_4k 00:16:50.275 ************************************ 00:16:50.275 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:16:50.275 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:50.275 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:50.275 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=66332 00:16:50.276 Process raid pid: 66332 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66332' 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 66332 /var/tmp/spdk-raid.sock 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@830 -- # '[' -z 66332 ']' 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:50.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:50.276 10:20:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.276 [2024-06-10 10:20:55.653528] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:16:50.276 [2024-06-10 10:20:55.653694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:16:50.543 EAL: TSC is not safe to use in SMP mode 00:16:50.543 EAL: TSC is not invariant 00:16:50.543 [2024-06-10 10:20:56.135116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.817 [2024-06-10 10:20:56.217659] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:50.817 [2024-06-10 10:20:56.219825] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.817 [2024-06-10 10:20:56.220550] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.817 [2024-06-10 10:20:56.220563] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.384 10:20:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:51.384 10:20:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@863 -- # return 0 00:16:51.384 10:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:51.644 [2024-06-10 10:20:57.039537] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.644 [2024-06-10 10:20:57.039588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.644 [2024-06-10 10:20:57.039593] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.644 [2024-06-10 10:20:57.039601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.644 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.903 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.903 "name": "Existed_Raid", 00:16:51.903 "uuid": 
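The state-function tests that follow run against the much lighter bdev_svc test application started above instead of bdevperf, since only RPC-driven state transitions are being checked. Note the ordering: bdev_raid_create -s -r raid1 is issued before either BaseBdev1 or BaseBdev2 exists ("doesn't exist now" in the trace), so Existed_Raid is registered in the "configuring" state, and the bdev_raid_get_bdevs dump that follows reports num_base_bdevs_discovered 0 of 2. The verify_raid_bdev_state helper used throughout boils down to a jq filter over that RPC output, roughly as in this simplified sketch (not the literal helper from bdev_raid.sh):

    RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r .state <<< "$info")                          # "configuring" until all base bdevs appear
    discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
    [[ $state == configuring && $discovered -eq 0 ]]           # non-zero exit marks the step as failed

Once the two 32 MiB malloc bdevs with 4096-byte blocks are added (8192 blocks each), the array transitions to "online"; with the -s superblock the first 256 blocks of each member are reserved, which is why the later dumps show data_offset 256 and data_size 7936, matching the 1 MiB reserved as 2048 512-byte blocks in the write-error test above.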
"203b52a5-2713-11ef-b084-113036b5c18d", 00:16:51.903 "strip_size_kb": 0, 00:16:51.903 "state": "configuring", 00:16:51.903 "raid_level": "raid1", 00:16:51.903 "superblock": true, 00:16:51.903 "num_base_bdevs": 2, 00:16:51.903 "num_base_bdevs_discovered": 0, 00:16:51.903 "num_base_bdevs_operational": 2, 00:16:51.903 "base_bdevs_list": [ 00:16:51.903 { 00:16:51.903 "name": "BaseBdev1", 00:16:51.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.903 "is_configured": false, 00:16:51.903 "data_offset": 0, 00:16:51.903 "data_size": 0 00:16:51.903 }, 00:16:51.903 { 00:16:51.903 "name": "BaseBdev2", 00:16:51.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.903 "is_configured": false, 00:16:51.903 "data_offset": 0, 00:16:51.903 "data_size": 0 00:16:51.903 } 00:16:51.903 ] 00:16:51.903 }' 00:16:51.903 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.903 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.161 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:52.420 [2024-06-10 10:20:57.939555] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.420 [2024-06-10 10:20:57.939578] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be64500 name Existed_Raid, state configuring 00:16:52.420 10:20:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:52.679 [2024-06-10 10:20:58.215587] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.679 [2024-06-10 10:20:58.215645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.679 [2024-06-10 10:20:58.215650] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.679 [2024-06-10 10:20:58.215658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.679 10:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:16:52.937 [2024-06-10 10:20:58.508622] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.937 BaseBdev1 00:16:52.937 10:20:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:52.937 10:20:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:16:52.937 10:20:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:52.937 10:20:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local i 00:16:52.938 10:20:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:52.938 10:20:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:52.938 10:20:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:53.505 10:20:58 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.505 [ 00:16:53.505 { 00:16:53.505 "name": "BaseBdev1", 00:16:53.505 "aliases": [ 00:16:53.505 "211b5611-2713-11ef-b084-113036b5c18d" 00:16:53.505 ], 00:16:53.505 "product_name": "Malloc disk", 00:16:53.505 "block_size": 4096, 00:16:53.505 "num_blocks": 8192, 00:16:53.505 "uuid": "211b5611-2713-11ef-b084-113036b5c18d", 00:16:53.505 "assigned_rate_limits": { 00:16:53.505 "rw_ios_per_sec": 0, 00:16:53.505 "rw_mbytes_per_sec": 0, 00:16:53.505 "r_mbytes_per_sec": 0, 00:16:53.505 "w_mbytes_per_sec": 0 00:16:53.505 }, 00:16:53.505 "claimed": true, 00:16:53.505 "claim_type": "exclusive_write", 00:16:53.505 "zoned": false, 00:16:53.505 "supported_io_types": { 00:16:53.505 "read": true, 00:16:53.505 "write": true, 00:16:53.505 "unmap": true, 00:16:53.505 "write_zeroes": true, 00:16:53.505 "flush": true, 00:16:53.505 "reset": true, 00:16:53.505 "compare": false, 00:16:53.505 "compare_and_write": false, 00:16:53.505 "abort": true, 00:16:53.505 "nvme_admin": false, 00:16:53.505 "nvme_io": false 00:16:53.505 }, 00:16:53.505 "memory_domains": [ 00:16:53.505 { 00:16:53.505 "dma_device_id": "system", 00:16:53.505 "dma_device_type": 1 00:16:53.505 }, 00:16:53.505 { 00:16:53.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.505 "dma_device_type": 2 00:16:53.505 } 00:16:53.505 ], 00:16:53.505 "driver_specific": {} 00:16:53.505 } 00:16:53.505 ] 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # return 0 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.505 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.071 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.071 "name": "Existed_Raid", 00:16:54.071 "uuid": "20eec633-2713-11ef-b084-113036b5c18d", 00:16:54.071 "strip_size_kb": 0, 00:16:54.071 "state": "configuring", 00:16:54.071 "raid_level": "raid1", 00:16:54.071 "superblock": true, 
00:16:54.071 "num_base_bdevs": 2, 00:16:54.071 "num_base_bdevs_discovered": 1, 00:16:54.071 "num_base_bdevs_operational": 2, 00:16:54.071 "base_bdevs_list": [ 00:16:54.071 { 00:16:54.071 "name": "BaseBdev1", 00:16:54.071 "uuid": "211b5611-2713-11ef-b084-113036b5c18d", 00:16:54.071 "is_configured": true, 00:16:54.071 "data_offset": 256, 00:16:54.071 "data_size": 7936 00:16:54.071 }, 00:16:54.071 { 00:16:54.071 "name": "BaseBdev2", 00:16:54.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.071 "is_configured": false, 00:16:54.071 "data_offset": 0, 00:16:54.071 "data_size": 0 00:16:54.071 } 00:16:54.071 ] 00:16:54.071 }' 00:16:54.071 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.071 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.329 10:20:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.589 [2024-06-10 10:21:00.103664] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.589 [2024-06-10 10:21:00.103698] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be64500 name Existed_Raid, state configuring 00:16:54.589 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:54.849 [2024-06-10 10:21:00.427703] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.849 [2024-06-10 10:21:00.428436] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.849 [2024-06-10 10:21:00.428478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.849 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:54.849 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:54.849 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:54.849 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.107 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.107 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:55.107 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:55.107 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:55.107 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.107 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.108 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.108 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.108 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.108 10:21:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.365 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.365 "name": "Existed_Raid", 00:16:55.365 "uuid": "224050c3-2713-11ef-b084-113036b5c18d", 00:16:55.365 "strip_size_kb": 0, 00:16:55.365 "state": "configuring", 00:16:55.365 "raid_level": "raid1", 00:16:55.365 "superblock": true, 00:16:55.365 "num_base_bdevs": 2, 00:16:55.365 "num_base_bdevs_discovered": 1, 00:16:55.365 "num_base_bdevs_operational": 2, 00:16:55.365 "base_bdevs_list": [ 00:16:55.365 { 00:16:55.365 "name": "BaseBdev1", 00:16:55.365 "uuid": "211b5611-2713-11ef-b084-113036b5c18d", 00:16:55.365 "is_configured": true, 00:16:55.365 "data_offset": 256, 00:16:55.365 "data_size": 7936 00:16:55.365 }, 00:16:55.365 { 00:16:55.365 "name": "BaseBdev2", 00:16:55.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.365 "is_configured": false, 00:16:55.365 "data_offset": 0, 00:16:55.365 "data_size": 0 00:16:55.365 } 00:16:55.365 ] 00:16:55.365 }' 00:16:55.365 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.365 10:21:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.622 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:16:55.879 [2024-06-10 10:21:01.311817] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.879 [2024-06-10 10:21:01.311867] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82be64a00 00:16:55.879 [2024-06-10 10:21:01.311872] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:55.879 [2024-06-10 10:21:01.311890] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bec7ec0 00:16:55.879 [2024-06-10 10:21:01.311924] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82be64a00 00:16:55.879 [2024-06-10 10:21:01.311927] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82be64a00 00:16:55.879 [2024-06-10 10:21:01.311944] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.879 BaseBdev2 00:16:55.879 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:55.880 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:16:55.880 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:16:55.880 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local i 00:16:55.880 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:16:55.880 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:16:55.880 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.137 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.395 [ 00:16:56.395 { 00:16:56.395 "name": "BaseBdev2", 00:16:56.395 "aliases": [ 00:16:56.395 "22c734a7-2713-11ef-b084-113036b5c18d" 00:16:56.395 ], 00:16:56.395 "product_name": "Malloc disk", 00:16:56.395 "block_size": 4096, 00:16:56.395 "num_blocks": 8192, 00:16:56.395 "uuid": "22c734a7-2713-11ef-b084-113036b5c18d", 00:16:56.395 "assigned_rate_limits": { 00:16:56.395 "rw_ios_per_sec": 0, 00:16:56.395 "rw_mbytes_per_sec": 0, 00:16:56.395 "r_mbytes_per_sec": 0, 00:16:56.395 "w_mbytes_per_sec": 0 00:16:56.395 }, 00:16:56.395 "claimed": true, 00:16:56.395 "claim_type": "exclusive_write", 00:16:56.395 "zoned": false, 00:16:56.395 "supported_io_types": { 00:16:56.395 "read": true, 00:16:56.395 "write": true, 00:16:56.395 "unmap": true, 00:16:56.395 "write_zeroes": true, 00:16:56.395 "flush": true, 00:16:56.395 "reset": true, 00:16:56.395 "compare": false, 00:16:56.395 "compare_and_write": false, 00:16:56.395 "abort": true, 00:16:56.395 "nvme_admin": false, 00:16:56.395 "nvme_io": false 00:16:56.395 }, 00:16:56.395 "memory_domains": [ 00:16:56.395 { 00:16:56.395 "dma_device_id": "system", 00:16:56.395 "dma_device_type": 1 00:16:56.395 }, 00:16:56.395 { 00:16:56.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.395 "dma_device_type": 2 00:16:56.395 } 00:16:56.395 ], 00:16:56.395 "driver_specific": {} 00:16:56.395 } 00:16:56.395 ] 00:16:56.395 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # return 0 00:16:56.395 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.396 10:21:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.654 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.654 "name": "Existed_Raid", 00:16:56.654 "uuid": "224050c3-2713-11ef-b084-113036b5c18d", 00:16:56.654 "strip_size_kb": 0, 00:16:56.654 "state": 
"online", 00:16:56.654 "raid_level": "raid1", 00:16:56.654 "superblock": true, 00:16:56.654 "num_base_bdevs": 2, 00:16:56.654 "num_base_bdevs_discovered": 2, 00:16:56.654 "num_base_bdevs_operational": 2, 00:16:56.654 "base_bdevs_list": [ 00:16:56.654 { 00:16:56.654 "name": "BaseBdev1", 00:16:56.654 "uuid": "211b5611-2713-11ef-b084-113036b5c18d", 00:16:56.654 "is_configured": true, 00:16:56.654 "data_offset": 256, 00:16:56.654 "data_size": 7936 00:16:56.654 }, 00:16:56.654 { 00:16:56.654 "name": "BaseBdev2", 00:16:56.654 "uuid": "22c734a7-2713-11ef-b084-113036b5c18d", 00:16:56.654 "is_configured": true, 00:16:56.654 "data_offset": 256, 00:16:56.654 "data_size": 7936 00:16:56.654 } 00:16:56.654 ] 00:16:56.654 }' 00:16:56.654 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.654 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.912 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:56.912 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:56.912 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:56.912 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:56.912 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:56.913 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:16:56.913 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:56.913 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:57.170 [2024-06-10 10:21:02.667807] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.170 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:57.170 "name": "Existed_Raid", 00:16:57.170 "aliases": [ 00:16:57.170 "224050c3-2713-11ef-b084-113036b5c18d" 00:16:57.170 ], 00:16:57.170 "product_name": "Raid Volume", 00:16:57.170 "block_size": 4096, 00:16:57.170 "num_blocks": 7936, 00:16:57.170 "uuid": "224050c3-2713-11ef-b084-113036b5c18d", 00:16:57.170 "assigned_rate_limits": { 00:16:57.170 "rw_ios_per_sec": 0, 00:16:57.170 "rw_mbytes_per_sec": 0, 00:16:57.170 "r_mbytes_per_sec": 0, 00:16:57.170 "w_mbytes_per_sec": 0 00:16:57.170 }, 00:16:57.170 "claimed": false, 00:16:57.170 "zoned": false, 00:16:57.170 "supported_io_types": { 00:16:57.170 "read": true, 00:16:57.170 "write": true, 00:16:57.170 "unmap": false, 00:16:57.170 "write_zeroes": true, 00:16:57.170 "flush": false, 00:16:57.170 "reset": true, 00:16:57.170 "compare": false, 00:16:57.170 "compare_and_write": false, 00:16:57.170 "abort": false, 00:16:57.170 "nvme_admin": false, 00:16:57.170 "nvme_io": false 00:16:57.170 }, 00:16:57.170 "memory_domains": [ 00:16:57.170 { 00:16:57.170 "dma_device_id": "system", 00:16:57.170 "dma_device_type": 1 00:16:57.170 }, 00:16:57.170 { 00:16:57.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.170 "dma_device_type": 2 00:16:57.170 }, 00:16:57.170 { 00:16:57.170 "dma_device_id": "system", 00:16:57.170 "dma_device_type": 1 00:16:57.170 }, 00:16:57.170 { 00:16:57.170 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:57.170 "dma_device_type": 2 00:16:57.170 } 00:16:57.170 ], 00:16:57.170 "driver_specific": { 00:16:57.170 "raid": { 00:16:57.170 "uuid": "224050c3-2713-11ef-b084-113036b5c18d", 00:16:57.170 "strip_size_kb": 0, 00:16:57.170 "state": "online", 00:16:57.170 "raid_level": "raid1", 00:16:57.170 "superblock": true, 00:16:57.170 "num_base_bdevs": 2, 00:16:57.170 "num_base_bdevs_discovered": 2, 00:16:57.170 "num_base_bdevs_operational": 2, 00:16:57.170 "base_bdevs_list": [ 00:16:57.171 { 00:16:57.171 "name": "BaseBdev1", 00:16:57.171 "uuid": "211b5611-2713-11ef-b084-113036b5c18d", 00:16:57.171 "is_configured": true, 00:16:57.171 "data_offset": 256, 00:16:57.171 "data_size": 7936 00:16:57.171 }, 00:16:57.171 { 00:16:57.171 "name": "BaseBdev2", 00:16:57.171 "uuid": "22c734a7-2713-11ef-b084-113036b5c18d", 00:16:57.171 "is_configured": true, 00:16:57.171 "data_offset": 256, 00:16:57.171 "data_size": 7936 00:16:57.171 } 00:16:57.171 ] 00:16:57.171 } 00:16:57.171 } 00:16:57.171 }' 00:16:57.171 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.171 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:57.171 BaseBdev2' 00:16:57.171 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:57.171 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:57.171 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:57.429 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:57.429 "name": "BaseBdev1", 00:16:57.429 "aliases": [ 00:16:57.429 "211b5611-2713-11ef-b084-113036b5c18d" 00:16:57.429 ], 00:16:57.429 "product_name": "Malloc disk", 00:16:57.429 "block_size": 4096, 00:16:57.429 "num_blocks": 8192, 00:16:57.429 "uuid": "211b5611-2713-11ef-b084-113036b5c18d", 00:16:57.429 "assigned_rate_limits": { 00:16:57.429 "rw_ios_per_sec": 0, 00:16:57.429 "rw_mbytes_per_sec": 0, 00:16:57.429 "r_mbytes_per_sec": 0, 00:16:57.429 "w_mbytes_per_sec": 0 00:16:57.429 }, 00:16:57.429 "claimed": true, 00:16:57.429 "claim_type": "exclusive_write", 00:16:57.429 "zoned": false, 00:16:57.429 "supported_io_types": { 00:16:57.429 "read": true, 00:16:57.429 "write": true, 00:16:57.429 "unmap": true, 00:16:57.429 "write_zeroes": true, 00:16:57.429 "flush": true, 00:16:57.429 "reset": true, 00:16:57.429 "compare": false, 00:16:57.429 "compare_and_write": false, 00:16:57.429 "abort": true, 00:16:57.429 "nvme_admin": false, 00:16:57.429 "nvme_io": false 00:16:57.429 }, 00:16:57.429 "memory_domains": [ 00:16:57.429 { 00:16:57.429 "dma_device_id": "system", 00:16:57.429 "dma_device_type": 1 00:16:57.429 }, 00:16:57.429 { 00:16:57.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.429 "dma_device_type": 2 00:16:57.429 } 00:16:57.429 ], 00:16:57.429 "driver_specific": {} 00:16:57.429 }' 00:16:57.429 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:57.429 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:57.429 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:57.429 10:21:02 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.429 10:21:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.429 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:57.429 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:57.429 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:57.429 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:57.429 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.429 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.687 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:57.687 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:57.687 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:57.687 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:57.945 "name": "BaseBdev2", 00:16:57.945 "aliases": [ 00:16:57.945 "22c734a7-2713-11ef-b084-113036b5c18d" 00:16:57.945 ], 00:16:57.945 "product_name": "Malloc disk", 00:16:57.945 "block_size": 4096, 00:16:57.945 "num_blocks": 8192, 00:16:57.945 "uuid": "22c734a7-2713-11ef-b084-113036b5c18d", 00:16:57.945 "assigned_rate_limits": { 00:16:57.945 "rw_ios_per_sec": 0, 00:16:57.945 "rw_mbytes_per_sec": 0, 00:16:57.945 "r_mbytes_per_sec": 0, 00:16:57.945 "w_mbytes_per_sec": 0 00:16:57.945 }, 00:16:57.945 "claimed": true, 00:16:57.945 "claim_type": "exclusive_write", 00:16:57.945 "zoned": false, 00:16:57.945 "supported_io_types": { 00:16:57.945 "read": true, 00:16:57.945 "write": true, 00:16:57.945 "unmap": true, 00:16:57.945 "write_zeroes": true, 00:16:57.945 "flush": true, 00:16:57.945 "reset": true, 00:16:57.945 "compare": false, 00:16:57.945 "compare_and_write": false, 00:16:57.945 "abort": true, 00:16:57.945 "nvme_admin": false, 00:16:57.945 "nvme_io": false 00:16:57.945 }, 00:16:57.945 "memory_domains": [ 00:16:57.945 { 00:16:57.945 "dma_device_id": "system", 00:16:57.945 "dma_device_type": 1 00:16:57.945 }, 00:16:57.945 { 00:16:57.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.945 "dma_device_type": 2 00:16:57.945 } 00:16:57.945 ], 00:16:57.945 "driver_specific": {} 00:16:57.945 }' 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.945 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.946 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:57.946 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:58.203 [2024-06-10 10:21:03.715832] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.203 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.461 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.461 "name": "Existed_Raid", 00:16:58.461 "uuid": "224050c3-2713-11ef-b084-113036b5c18d", 00:16:58.461 "strip_size_kb": 0, 00:16:58.461 "state": "online", 00:16:58.461 "raid_level": "raid1", 00:16:58.461 "superblock": true, 00:16:58.461 "num_base_bdevs": 2, 00:16:58.461 "num_base_bdevs_discovered": 1, 00:16:58.461 "num_base_bdevs_operational": 1, 00:16:58.461 "base_bdevs_list": [ 00:16:58.461 { 00:16:58.461 "name": null, 00:16:58.461 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:58.461 "is_configured": false, 00:16:58.461 "data_offset": 256, 00:16:58.461 "data_size": 7936 00:16:58.461 }, 00:16:58.461 { 00:16:58.461 "name": "BaseBdev2", 00:16:58.461 "uuid": "22c734a7-2713-11ef-b084-113036b5c18d", 00:16:58.461 "is_configured": true, 00:16:58.461 "data_offset": 256, 00:16:58.461 "data_size": 7936 00:16:58.461 } 00:16:58.461 ] 00:16:58.461 }' 00:16:58.461 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.461 10:21:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.719 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:58.719 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:58.719 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:58.719 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.976 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:58.976 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.976 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:59.297 [2024-06-10 10:21:04.708627] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:59.297 [2024-06-10 10:21:04.708668] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.297 [2024-06-10 10:21:04.713406] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.297 [2024-06-10 10:21:04.713437] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.297 [2024-06-10 10:21:04.713442] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be64a00 name Existed_Raid, state offline 00:16:59.297 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:59.297 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:59.297 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.297 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 66332 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@949 -- # '[' -z 66332 ']' 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # kill -0 66332 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@954 -- # uname 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # ps -c -o command 66332 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # tail -1 00:16:59.570 10:21:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:16:59.570 10:21:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:16:59.570 killing process with pid 66332 00:16:59.570 10:21:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66332' 00:16:59.570 10:21:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # kill 66332 00:16:59.570 [2024-06-10 10:21:05.000478] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.570 [2024-06-10 10:21:05.000521] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.570 10:21:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # wait 66332 00:16:59.570 10:21:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:16:59.570 00:16:59.570 real 0m9.525s 00:16:59.570 user 0m16.629s 00:16:59.570 sys 0m1.479s 00:16:59.570 10:21:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:59.570 10:21:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.570 ************************************ 00:16:59.570 END TEST raid_state_function_test_sb_4k 00:16:59.570 ************************************ 00:16:59.829 10:21:05 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:59.829 10:21:05 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:16:59.829 10:21:05 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:59.829 10:21:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.829 ************************************ 00:16:59.829 START TEST raid_superblock_test_4k 00:16:59.829 ************************************ 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local 
strip_size 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=66606 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 66606 /var/tmp/spdk-raid.sock 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@830 -- # '[' -z 66606 ']' 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:59.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:59.829 10:21:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.829 [2024-06-10 10:21:05.228913] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:16:59.829 [2024-06-10 10:21:05.229085] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:00.477 EAL: TSC is not safe to use in SMP mode 00:17:00.477 EAL: TSC is not invariant 00:17:00.477 [2024-06-10 10:21:05.729165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.477 [2024-06-10 10:21:05.810873] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
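For reference, the RPC sequence that raid_superblock_test_4k drives in the trace that follows can be reproduced by hand against a bdev_svc target listening on /var/tmp/spdk-raid.sock. This is a minimal illustrative sketch assembled only from commands visible in the trace itself; it is not part of the test script, and it assumes the bdev_svc process shown above is already running and serving that socket:

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # back each array member with a 32 MiB malloc bdev using a 4 KiB block size
    # (8192 blocks x 4096 bytes, matching the num_blocks/block_size seen in the trace)
    $RPC bdev_malloc_create 32 4096 -b malloc1
    $RPC bdev_malloc_create 32 4096 -b malloc2
    # wrap the malloc bdevs in passthru bdevs with fixed UUIDs, as the test does
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # assemble a two-member raid1 volume; -s requests the on-bdev superblock this test exercises
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
    # inspect the result the same way the verify helpers do
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    $RPC bdev_get_bdevs -b pt1 | jq '.[].block_size'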
00:17:00.477 [2024-06-10 10:21:05.813001] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.477 [2024-06-10 10:21:05.813682] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.477 [2024-06-10 10:21:05.813694] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@863 -- # return 0 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:00.739 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:17:01.003 malloc1 00:17:01.003 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.369 [2024-06-10 10:21:06.732090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.369 [2024-06-10 10:21:06.732145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.369 [2024-06-10 10:21:06.732161] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82e26e780 00:17:01.369 [2024-06-10 10:21:06.732168] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.369 [2024-06-10 10:21:06.732938] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.369 [2024-06-10 10:21:06.732969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.369 pt1 00:17:01.369 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:01.369 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:01.369 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:01.369 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:01.369 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:01.369 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:01.369 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:01.369 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:01.369 10:21:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:17:01.651 malloc2 00:17:01.651 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.651 [2024-06-10 10:21:07.248095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.651 [2024-06-10 10:21:07.248147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.651 [2024-06-10 10:21:07.248166] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82e26ec80 00:17:01.651 [2024-06-10 10:21:07.248174] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.651 [2024-06-10 10:21:07.248659] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.651 [2024-06-10 10:21:07.248685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.651 pt2 00:17:01.952 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:01.952 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:01.952 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:02.231 [2024-06-10 10:21:07.608113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.231 [2024-06-10 10:21:07.608570] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.231 [2024-06-10 10:21:07.608622] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e26ef00 00:17:02.231 [2024-06-10 10:21:07.608627] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:02.231 [2024-06-10 10:21:07.608659] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e2d1e20 00:17:02.231 [2024-06-10 10:21:07.608712] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e26ef00 00:17:02.231 [2024-06-10 10:21:07.608716] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82e26ef00 00:17:02.231 [2024-06-10 10:21:07.608736] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.231 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.232 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.549 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.549 "name": "raid_bdev1", 00:17:02.549 "uuid": "2687f599-2713-11ef-b084-113036b5c18d", 00:17:02.549 "strip_size_kb": 0, 00:17:02.549 "state": "online", 00:17:02.549 "raid_level": "raid1", 00:17:02.549 "superblock": true, 00:17:02.549 "num_base_bdevs": 2, 00:17:02.549 "num_base_bdevs_discovered": 2, 00:17:02.549 "num_base_bdevs_operational": 2, 00:17:02.549 "base_bdevs_list": [ 00:17:02.549 { 00:17:02.549 "name": "pt1", 00:17:02.549 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.549 "is_configured": true, 00:17:02.549 "data_offset": 256, 00:17:02.549 "data_size": 7936 00:17:02.549 }, 00:17:02.549 { 00:17:02.549 "name": "pt2", 00:17:02.549 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.549 "is_configured": true, 00:17:02.549 "data_offset": 256, 00:17:02.549 "data_size": 7936 00:17:02.549 } 00:17:02.549 ] 00:17:02.549 }' 00:17:02.549 10:21:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.549 10:21:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.884 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:02.884 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:02.884 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:02.884 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:02.884 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:02.884 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:02.884 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:02.884 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:02.884 [2024-06-10 10:21:08.360178] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.884 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:02.884 "name": "raid_bdev1", 00:17:02.884 "aliases": [ 00:17:02.884 "2687f599-2713-11ef-b084-113036b5c18d" 00:17:02.884 ], 00:17:02.884 "product_name": "Raid Volume", 00:17:02.884 "block_size": 4096, 00:17:02.884 "num_blocks": 7936, 00:17:02.884 "uuid": "2687f599-2713-11ef-b084-113036b5c18d", 00:17:02.884 "assigned_rate_limits": { 00:17:02.884 "rw_ios_per_sec": 0, 00:17:02.884 "rw_mbytes_per_sec": 0, 00:17:02.884 "r_mbytes_per_sec": 0, 00:17:02.884 "w_mbytes_per_sec": 0 00:17:02.884 }, 00:17:02.884 "claimed": false, 00:17:02.884 "zoned": false, 00:17:02.884 "supported_io_types": { 00:17:02.884 "read": true, 00:17:02.884 "write": true, 00:17:02.884 
"unmap": false, 00:17:02.884 "write_zeroes": true, 00:17:02.884 "flush": false, 00:17:02.884 "reset": true, 00:17:02.884 "compare": false, 00:17:02.884 "compare_and_write": false, 00:17:02.884 "abort": false, 00:17:02.885 "nvme_admin": false, 00:17:02.885 "nvme_io": false 00:17:02.885 }, 00:17:02.885 "memory_domains": [ 00:17:02.885 { 00:17:02.885 "dma_device_id": "system", 00:17:02.885 "dma_device_type": 1 00:17:02.885 }, 00:17:02.885 { 00:17:02.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.885 "dma_device_type": 2 00:17:02.885 }, 00:17:02.885 { 00:17:02.885 "dma_device_id": "system", 00:17:02.885 "dma_device_type": 1 00:17:02.885 }, 00:17:02.885 { 00:17:02.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.885 "dma_device_type": 2 00:17:02.885 } 00:17:02.885 ], 00:17:02.885 "driver_specific": { 00:17:02.885 "raid": { 00:17:02.885 "uuid": "2687f599-2713-11ef-b084-113036b5c18d", 00:17:02.885 "strip_size_kb": 0, 00:17:02.885 "state": "online", 00:17:02.885 "raid_level": "raid1", 00:17:02.885 "superblock": true, 00:17:02.885 "num_base_bdevs": 2, 00:17:02.885 "num_base_bdevs_discovered": 2, 00:17:02.885 "num_base_bdevs_operational": 2, 00:17:02.885 "base_bdevs_list": [ 00:17:02.885 { 00:17:02.885 "name": "pt1", 00:17:02.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.885 "is_configured": true, 00:17:02.885 "data_offset": 256, 00:17:02.885 "data_size": 7936 00:17:02.885 }, 00:17:02.885 { 00:17:02.885 "name": "pt2", 00:17:02.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.885 "is_configured": true, 00:17:02.885 "data_offset": 256, 00:17:02.885 "data_size": 7936 00:17:02.885 } 00:17:02.885 ] 00:17:02.885 } 00:17:02.885 } 00:17:02.885 }' 00:17:02.885 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.885 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:02.885 pt2' 00:17:02.885 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:02.885 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:02.885 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:03.219 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:03.219 "name": "pt1", 00:17:03.219 "aliases": [ 00:17:03.219 "00000000-0000-0000-0000-000000000001" 00:17:03.219 ], 00:17:03.219 "product_name": "passthru", 00:17:03.219 "block_size": 4096, 00:17:03.219 "num_blocks": 8192, 00:17:03.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.219 "assigned_rate_limits": { 00:17:03.219 "rw_ios_per_sec": 0, 00:17:03.219 "rw_mbytes_per_sec": 0, 00:17:03.219 "r_mbytes_per_sec": 0, 00:17:03.219 "w_mbytes_per_sec": 0 00:17:03.219 }, 00:17:03.219 "claimed": true, 00:17:03.219 "claim_type": "exclusive_write", 00:17:03.219 "zoned": false, 00:17:03.219 "supported_io_types": { 00:17:03.219 "read": true, 00:17:03.219 "write": true, 00:17:03.219 "unmap": true, 00:17:03.219 "write_zeroes": true, 00:17:03.219 "flush": true, 00:17:03.219 "reset": true, 00:17:03.219 "compare": false, 00:17:03.219 "compare_and_write": false, 00:17:03.219 "abort": true, 00:17:03.219 "nvme_admin": false, 00:17:03.219 "nvme_io": false 00:17:03.219 }, 00:17:03.219 "memory_domains": [ 00:17:03.219 { 00:17:03.219 "dma_device_id": 
"system", 00:17:03.219 "dma_device_type": 1 00:17:03.219 }, 00:17:03.219 { 00:17:03.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.219 "dma_device_type": 2 00:17:03.219 } 00:17:03.219 ], 00:17:03.220 "driver_specific": { 00:17:03.220 "passthru": { 00:17:03.220 "name": "pt1", 00:17:03.220 "base_bdev_name": "malloc1" 00:17:03.220 } 00:17:03.220 } 00:17:03.220 }' 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:03.220 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:03.505 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:03.505 "name": "pt2", 00:17:03.505 "aliases": [ 00:17:03.505 "00000000-0000-0000-0000-000000000002" 00:17:03.505 ], 00:17:03.505 "product_name": "passthru", 00:17:03.505 "block_size": 4096, 00:17:03.505 "num_blocks": 8192, 00:17:03.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.505 "assigned_rate_limits": { 00:17:03.505 "rw_ios_per_sec": 0, 00:17:03.505 "rw_mbytes_per_sec": 0, 00:17:03.505 "r_mbytes_per_sec": 0, 00:17:03.505 "w_mbytes_per_sec": 0 00:17:03.505 }, 00:17:03.505 "claimed": true, 00:17:03.505 "claim_type": "exclusive_write", 00:17:03.505 "zoned": false, 00:17:03.505 "supported_io_types": { 00:17:03.505 "read": true, 00:17:03.505 "write": true, 00:17:03.505 "unmap": true, 00:17:03.505 "write_zeroes": true, 00:17:03.505 "flush": true, 00:17:03.505 "reset": true, 00:17:03.505 "compare": false, 00:17:03.505 "compare_and_write": false, 00:17:03.505 "abort": true, 00:17:03.505 "nvme_admin": false, 00:17:03.505 "nvme_io": false 00:17:03.505 }, 00:17:03.505 "memory_domains": [ 00:17:03.505 { 00:17:03.505 "dma_device_id": "system", 00:17:03.505 "dma_device_type": 1 00:17:03.505 }, 00:17:03.505 { 00:17:03.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.505 "dma_device_type": 2 00:17:03.505 } 00:17:03.505 ], 00:17:03.505 "driver_specific": { 00:17:03.505 "passthru": { 00:17:03.505 "name": "pt2", 00:17:03.505 "base_bdev_name": "malloc2" 00:17:03.505 } 00:17:03.505 } 00:17:03.505 }' 00:17:03.505 10:21:08 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.505 10:21:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:03.505 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:03.763 [2024-06-10 10:21:09.296173] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.763 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2687f599-2713-11ef-b084-113036b5c18d 00:17:03.763 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 2687f599-2713-11ef-b084-113036b5c18d ']' 00:17:03.763 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:04.022 [2024-06-10 10:21:09.592201] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:04.022 [2024-06-10 10:21:09.592223] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.022 [2024-06-10 10:21:09.592255] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.022 [2024-06-10 10:21:09.592269] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.022 [2024-06-10 10:21:09.592273] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e26ef00 name raid_bdev1, state offline 00:17:04.022 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.022 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:04.589 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:04.589 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:04.589 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:04.589 10:21:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:04.589 
10:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:04.589 10:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:04.847 10:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:04.847 10:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@649 -- # local es=0 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:05.108 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:05.368 [2024-06-10 10:21:10.820294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:05.368 [2024-06-10 10:21:10.820766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:05.368 [2024-06-10 10:21:10.820783] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:05.368 [2024-06-10 10:21:10.820820] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:05.368 [2024-06-10 10:21:10.820830] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.368 [2024-06-10 10:21:10.820834] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e26ec80 name raid_bdev1, state configuring 00:17:05.368 request: 00:17:05.368 { 00:17:05.368 "name": 
"raid_bdev1", 00:17:05.368 "raid_level": "raid1", 00:17:05.368 "base_bdevs": [ 00:17:05.368 "malloc1", 00:17:05.368 "malloc2" 00:17:05.368 ], 00:17:05.368 "superblock": false, 00:17:05.368 "method": "bdev_raid_create", 00:17:05.368 "req_id": 1 00:17:05.368 } 00:17:05.368 Got JSON-RPC error response 00:17:05.368 response: 00:17:05.368 { 00:17:05.368 "code": -17, 00:17:05.368 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:05.368 } 00:17:05.368 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # es=1 00:17:05.368 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:05.368 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:05.368 10:21:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:05.368 10:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.368 10:21:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:05.625 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:05.625 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:05.625 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.883 [2024-06-10 10:21:11.336326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.883 [2024-06-10 10:21:11.336374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.883 [2024-06-10 10:21:11.336385] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82e26e780 00:17:05.883 [2024-06-10 10:21:11.336393] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.883 [2024-06-10 10:21:11.336899] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.883 [2024-06-10 10:21:11.336927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.883 [2024-06-10 10:21:11.336947] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:05.883 [2024-06-10 10:21:11.336958] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.883 pt1 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.883 10:21:11 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.883 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.142 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.142 "name": "raid_bdev1", 00:17:06.142 "uuid": "2687f599-2713-11ef-b084-113036b5c18d", 00:17:06.142 "strip_size_kb": 0, 00:17:06.142 "state": "configuring", 00:17:06.142 "raid_level": "raid1", 00:17:06.142 "superblock": true, 00:17:06.142 "num_base_bdevs": 2, 00:17:06.142 "num_base_bdevs_discovered": 1, 00:17:06.142 "num_base_bdevs_operational": 2, 00:17:06.142 "base_bdevs_list": [ 00:17:06.142 { 00:17:06.142 "name": "pt1", 00:17:06.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.142 "is_configured": true, 00:17:06.142 "data_offset": 256, 00:17:06.142 "data_size": 7936 00:17:06.142 }, 00:17:06.142 { 00:17:06.142 "name": null, 00:17:06.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.142 "is_configured": false, 00:17:06.142 "data_offset": 256, 00:17:06.142 "data_size": 7936 00:17:06.142 } 00:17:06.142 ] 00:17:06.142 }' 00:17:06.142 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.142 10:21:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.401 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:06.401 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:06.401 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:06.401 10:21:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:06.662 [2024-06-10 10:21:12.188383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:06.662 [2024-06-10 10:21:12.188429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.662 [2024-06-10 10:21:12.188439] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82e26ef00 00:17:06.662 [2024-06-10 10:21:12.188447] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.663 [2024-06-10 10:21:12.188533] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.663 [2024-06-10 10:21:12.188542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:06.663 [2024-06-10 10:21:12.188559] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:06.663 [2024-06-10 10:21:12.188566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.663 [2024-06-10 10:21:12.188595] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e26f180 00:17:06.663 [2024-06-10 10:21:12.188599] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:06.663 [2024-06-10 10:21:12.188617] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x82e2d1e20 00:17:06.663 [2024-06-10 10:21:12.188652] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e26f180 00:17:06.663 [2024-06-10 10:21:12.188656] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82e26f180 00:17:06.663 [2024-06-10 10:21:12.188673] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.663 pt2 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.663 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.923 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.923 "name": "raid_bdev1", 00:17:06.923 "uuid": "2687f599-2713-11ef-b084-113036b5c18d", 00:17:06.923 "strip_size_kb": 0, 00:17:06.923 "state": "online", 00:17:06.923 "raid_level": "raid1", 00:17:06.923 "superblock": true, 00:17:06.923 "num_base_bdevs": 2, 00:17:06.923 "num_base_bdevs_discovered": 2, 00:17:06.923 "num_base_bdevs_operational": 2, 00:17:06.923 "base_bdevs_list": [ 00:17:06.923 { 00:17:06.923 "name": "pt1", 00:17:06.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.923 "is_configured": true, 00:17:06.923 "data_offset": 256, 00:17:06.923 "data_size": 7936 00:17:06.923 }, 00:17:06.923 { 00:17:06.923 "name": "pt2", 00:17:06.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.923 "is_configured": true, 00:17:06.923 "data_offset": 256, 00:17:06.923 "data_size": 7936 00:17:06.923 } 00:17:06.923 ] 00:17:06.923 }' 00:17:06.923 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.923 10:21:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.489 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.489 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:07.489 10:21:12 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:07.489 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:07.489 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:07.489 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:07.489 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:07.489 10:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:07.489 [2024-06-10 10:21:12.988438] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.489 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:07.489 "name": "raid_bdev1", 00:17:07.489 "aliases": [ 00:17:07.489 "2687f599-2713-11ef-b084-113036b5c18d" 00:17:07.489 ], 00:17:07.489 "product_name": "Raid Volume", 00:17:07.489 "block_size": 4096, 00:17:07.489 "num_blocks": 7936, 00:17:07.489 "uuid": "2687f599-2713-11ef-b084-113036b5c18d", 00:17:07.489 "assigned_rate_limits": { 00:17:07.489 "rw_ios_per_sec": 0, 00:17:07.489 "rw_mbytes_per_sec": 0, 00:17:07.489 "r_mbytes_per_sec": 0, 00:17:07.489 "w_mbytes_per_sec": 0 00:17:07.489 }, 00:17:07.489 "claimed": false, 00:17:07.489 "zoned": false, 00:17:07.489 "supported_io_types": { 00:17:07.489 "read": true, 00:17:07.489 "write": true, 00:17:07.489 "unmap": false, 00:17:07.489 "write_zeroes": true, 00:17:07.489 "flush": false, 00:17:07.489 "reset": true, 00:17:07.489 "compare": false, 00:17:07.489 "compare_and_write": false, 00:17:07.489 "abort": false, 00:17:07.489 "nvme_admin": false, 00:17:07.489 "nvme_io": false 00:17:07.489 }, 00:17:07.489 "memory_domains": [ 00:17:07.489 { 00:17:07.489 "dma_device_id": "system", 00:17:07.489 "dma_device_type": 1 00:17:07.489 }, 00:17:07.489 { 00:17:07.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.489 "dma_device_type": 2 00:17:07.489 }, 00:17:07.489 { 00:17:07.489 "dma_device_id": "system", 00:17:07.489 "dma_device_type": 1 00:17:07.489 }, 00:17:07.489 { 00:17:07.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.489 "dma_device_type": 2 00:17:07.489 } 00:17:07.489 ], 00:17:07.489 "driver_specific": { 00:17:07.489 "raid": { 00:17:07.489 "uuid": "2687f599-2713-11ef-b084-113036b5c18d", 00:17:07.489 "strip_size_kb": 0, 00:17:07.489 "state": "online", 00:17:07.489 "raid_level": "raid1", 00:17:07.489 "superblock": true, 00:17:07.489 "num_base_bdevs": 2, 00:17:07.489 "num_base_bdevs_discovered": 2, 00:17:07.489 "num_base_bdevs_operational": 2, 00:17:07.489 "base_bdevs_list": [ 00:17:07.489 { 00:17:07.489 "name": "pt1", 00:17:07.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.489 "is_configured": true, 00:17:07.489 "data_offset": 256, 00:17:07.489 "data_size": 7936 00:17:07.489 }, 00:17:07.489 { 00:17:07.489 "name": "pt2", 00:17:07.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.489 "is_configured": true, 00:17:07.489 "data_offset": 256, 00:17:07.489 "data_size": 7936 00:17:07.489 } 00:17:07.489 ] 00:17:07.489 } 00:17:07.489 } 00:17:07.489 }' 00:17:07.489 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.489 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:07.489 pt2' 00:17:07.489 10:21:13 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:07.489 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:07.489 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:07.746 "name": "pt1", 00:17:07.746 "aliases": [ 00:17:07.746 "00000000-0000-0000-0000-000000000001" 00:17:07.746 ], 00:17:07.746 "product_name": "passthru", 00:17:07.746 "block_size": 4096, 00:17:07.746 "num_blocks": 8192, 00:17:07.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.746 "assigned_rate_limits": { 00:17:07.746 "rw_ios_per_sec": 0, 00:17:07.746 "rw_mbytes_per_sec": 0, 00:17:07.746 "r_mbytes_per_sec": 0, 00:17:07.746 "w_mbytes_per_sec": 0 00:17:07.746 }, 00:17:07.746 "claimed": true, 00:17:07.746 "claim_type": "exclusive_write", 00:17:07.746 "zoned": false, 00:17:07.746 "supported_io_types": { 00:17:07.746 "read": true, 00:17:07.746 "write": true, 00:17:07.746 "unmap": true, 00:17:07.746 "write_zeroes": true, 00:17:07.746 "flush": true, 00:17:07.746 "reset": true, 00:17:07.746 "compare": false, 00:17:07.746 "compare_and_write": false, 00:17:07.746 "abort": true, 00:17:07.746 "nvme_admin": false, 00:17:07.746 "nvme_io": false 00:17:07.746 }, 00:17:07.746 "memory_domains": [ 00:17:07.746 { 00:17:07.746 "dma_device_id": "system", 00:17:07.746 "dma_device_type": 1 00:17:07.746 }, 00:17:07.746 { 00:17:07.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.746 "dma_device_type": 2 00:17:07.746 } 00:17:07.746 ], 00:17:07.746 "driver_specific": { 00:17:07.746 "passthru": { 00:17:07.746 "name": "pt1", 00:17:07.746 "base_bdev_name": "malloc1" 00:17:07.746 } 00:17:07.746 } 00:17:07.746 }' 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:07.746 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:08.001 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:08.001 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:08.001 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:08.001 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:08.001 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:08.001 10:21:13 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:08.001 "name": "pt2", 00:17:08.001 "aliases": [ 00:17:08.001 "00000000-0000-0000-0000-000000000002" 00:17:08.001 ], 00:17:08.001 "product_name": "passthru", 00:17:08.001 "block_size": 4096, 00:17:08.001 "num_blocks": 8192, 00:17:08.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.001 "assigned_rate_limits": { 00:17:08.001 "rw_ios_per_sec": 0, 00:17:08.001 "rw_mbytes_per_sec": 0, 00:17:08.001 "r_mbytes_per_sec": 0, 00:17:08.001 "w_mbytes_per_sec": 0 00:17:08.001 }, 00:17:08.001 "claimed": true, 00:17:08.001 "claim_type": "exclusive_write", 00:17:08.001 "zoned": false, 00:17:08.001 "supported_io_types": { 00:17:08.001 "read": true, 00:17:08.001 "write": true, 00:17:08.001 "unmap": true, 00:17:08.001 "write_zeroes": true, 00:17:08.001 "flush": true, 00:17:08.001 "reset": true, 00:17:08.001 "compare": false, 00:17:08.001 "compare_and_write": false, 00:17:08.001 "abort": true, 00:17:08.001 "nvme_admin": false, 00:17:08.001 "nvme_io": false 00:17:08.001 }, 00:17:08.001 "memory_domains": [ 00:17:08.001 { 00:17:08.001 "dma_device_id": "system", 00:17:08.001 "dma_device_type": 1 00:17:08.001 }, 00:17:08.001 { 00:17:08.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.001 "dma_device_type": 2 00:17:08.001 } 00:17:08.001 ], 00:17:08.001 "driver_specific": { 00:17:08.001 "passthru": { 00:17:08.001 "name": "pt2", 00:17:08.001 "base_bdev_name": "malloc2" 00:17:08.001 } 00:17:08.001 } 00:17:08.001 }' 00:17:08.001 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:08.001 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:08.001 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:08.001 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:08.258 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:08.258 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:08.258 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:08.258 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:08.258 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:08.258 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:08.258 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:08.258 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:08.258 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:08.258 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:08.516 [2024-06-10 10:21:13.924465] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.516 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 2687f599-2713-11ef-b084-113036b5c18d '!=' 2687f599-2713-11ef-b084-113036b5c18d ']' 00:17:08.516 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:08.516 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:08.516 10:21:13 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:17:08.516 10:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:08.774 [2024-06-10 10:21:14.208457] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.774 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.032 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.032 "name": "raid_bdev1", 00:17:09.032 "uuid": "2687f599-2713-11ef-b084-113036b5c18d", 00:17:09.032 "strip_size_kb": 0, 00:17:09.032 "state": "online", 00:17:09.032 "raid_level": "raid1", 00:17:09.032 "superblock": true, 00:17:09.032 "num_base_bdevs": 2, 00:17:09.032 "num_base_bdevs_discovered": 1, 00:17:09.032 "num_base_bdevs_operational": 1, 00:17:09.032 "base_bdevs_list": [ 00:17:09.032 { 00:17:09.032 "name": null, 00:17:09.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.032 "is_configured": false, 00:17:09.032 "data_offset": 256, 00:17:09.032 "data_size": 7936 00:17:09.032 }, 00:17:09.032 { 00:17:09.032 "name": "pt2", 00:17:09.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.032 "is_configured": true, 00:17:09.032 "data_offset": 256, 00:17:09.032 "data_size": 7936 00:17:09.032 } 00:17:09.032 ] 00:17:09.032 }' 00:17:09.032 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.032 10:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.291 10:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:09.551 [2024-06-10 10:21:15.072476] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.551 [2024-06-10 10:21:15.072497] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.551 [2024-06-10 10:21:15.072510] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.551 [2024-06-10 
10:21:15.072535] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.551 [2024-06-10 10:21:15.072539] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e26f180 name raid_bdev1, state offline 00:17:09.551 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.551 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:09.809 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:09.809 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:09.809 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:09.809 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:09.809 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:10.067 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:10.067 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:10.067 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:10.067 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:10.067 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:17:10.067 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.325 [2024-06-10 10:21:15.820537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.325 [2024-06-10 10:21:15.820592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.325 [2024-06-10 10:21:15.820603] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82e26ef00 00:17:10.325 [2024-06-10 10:21:15.820627] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.326 [2024-06-10 10:21:15.821133] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.326 [2024-06-10 10:21:15.821162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.326 [2024-06-10 10:21:15.821186] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:10.326 [2024-06-10 10:21:15.821196] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.326 [2024-06-10 10:21:15.821216] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e26f180 00:17:10.326 [2024-06-10 10:21:15.821220] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:10.326 [2024-06-10 10:21:15.821239] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e2d1e20 00:17:10.326 [2024-06-10 10:21:15.821273] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e26f180 00:17:10.326 [2024-06-10 10:21:15.821277] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82e26f180 00:17:10.326 [2024-06-10 
10:21:15.821295] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.326 pt2 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.326 10:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.583 10:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.583 "name": "raid_bdev1", 00:17:10.583 "uuid": "2687f599-2713-11ef-b084-113036b5c18d", 00:17:10.583 "strip_size_kb": 0, 00:17:10.583 "state": "online", 00:17:10.583 "raid_level": "raid1", 00:17:10.583 "superblock": true, 00:17:10.583 "num_base_bdevs": 2, 00:17:10.583 "num_base_bdevs_discovered": 1, 00:17:10.583 "num_base_bdevs_operational": 1, 00:17:10.583 "base_bdevs_list": [ 00:17:10.583 { 00:17:10.583 "name": null, 00:17:10.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.583 "is_configured": false, 00:17:10.583 "data_offset": 256, 00:17:10.583 "data_size": 7936 00:17:10.583 }, 00:17:10.583 { 00:17:10.583 "name": "pt2", 00:17:10.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:10.583 "is_configured": true, 00:17:10.583 "data_offset": 256, 00:17:10.583 "data_size": 7936 00:17:10.583 } 00:17:10.583 ] 00:17:10.583 }' 00:17:10.583 10:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.583 10:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.149 10:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:11.149 [2024-06-10 10:21:16.668575] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.149 [2024-06-10 10:21:16.668600] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.149 [2024-06-10 10:21:16.668620] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.149 [2024-06-10 10:21:16.668631] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.149 [2024-06-10 10:21:16.668636] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e26f180 name 
raid_bdev1, state offline 00:17:11.149 10:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:11.149 10:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.407 10:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:11.407 10:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:11.407 10:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:11.407 10:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:11.667 [2024-06-10 10:21:17.252608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:11.667 [2024-06-10 10:21:17.252663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.667 [2024-06-10 10:21:17.252691] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82e26ec80 00:17:11.667 [2024-06-10 10:21:17.252699] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.667 [2024-06-10 10:21:17.253270] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.667 [2024-06-10 10:21:17.253311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:11.667 [2024-06-10 10:21:17.253347] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:11.667 [2024-06-10 10:21:17.253365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:11.667 [2024-06-10 10:21:17.253405] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:11.667 [2024-06-10 10:21:17.253413] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:11.667 [2024-06-10 10:21:17.253424] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e26e780 name raid_bdev1, state configuring 00:17:11.667 [2024-06-10 10:21:17.253439] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:11.667 [2024-06-10 10:21:17.253460] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82e26e780 00:17:11.667 [2024-06-10 10:21:17.253469] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:11.667 [2024-06-10 10:21:17.253502] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82e2d1e20 00:17:11.667 [2024-06-10 10:21:17.253550] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82e26e780 00:17:11.667 [2024-06-10 10:21:17.253559] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82e26e780 00:17:11.667 [2024-06-10 10:21:17.253587] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.667 pt1 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:11.925 10:21:17 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.925 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.208 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.208 "name": "raid_bdev1", 00:17:12.208 "uuid": "2687f599-2713-11ef-b084-113036b5c18d", 00:17:12.208 "strip_size_kb": 0, 00:17:12.208 "state": "online", 00:17:12.208 "raid_level": "raid1", 00:17:12.208 "superblock": true, 00:17:12.208 "num_base_bdevs": 2, 00:17:12.208 "num_base_bdevs_discovered": 1, 00:17:12.208 "num_base_bdevs_operational": 1, 00:17:12.208 "base_bdevs_list": [ 00:17:12.208 { 00:17:12.208 "name": null, 00:17:12.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.208 "is_configured": false, 00:17:12.208 "data_offset": 256, 00:17:12.208 "data_size": 7936 00:17:12.208 }, 00:17:12.208 { 00:17:12.208 "name": "pt2", 00:17:12.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:12.208 "is_configured": true, 00:17:12.208 "data_offset": 256, 00:17:12.208 "data_size": 7936 00:17:12.208 } 00:17:12.208 ] 00:17:12.208 }' 00:17:12.208 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.208 10:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.473 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:12.473 10:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:12.732 10:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:12.732 10:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:12.732 10:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:12.995 [2024-06-10 10:21:18.544698] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2687f599-2713-11ef-b084-113036b5c18d '!=' 2687f599-2713-11ef-b084-113036b5c18d ']' 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 66606 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@949 -- # 
'[' -z 66606 ']' 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # kill -0 66606 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # uname 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # ps -c -o command 66606 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # tail -1 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:17:12.995 killing process with pid 66606 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66606' 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # kill 66606 00:17:12.995 [2024-06-10 10:21:18.577571] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:12.995 [2024-06-10 10:21:18.577604] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.995 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # wait 66606 00:17:12.995 [2024-06-10 10:21:18.577618] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.995 [2024-06-10 10:21:18.577623] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82e26e780 name raid_bdev1, state offline 00:17:12.995 [2024-06-10 10:21:18.587319] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.252 10:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:17:13.252 00:17:13.252 real 0m13.538s 00:17:13.252 user 0m24.325s 00:17:13.252 sys 0m2.041s 00:17:13.252 ************************************ 00:17:13.252 END TEST raid_superblock_test_4k 00:17:13.252 ************************************ 00:17:13.252 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:13.252 10:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.252 10:21:18 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' '' = true ']' 00:17:13.252 10:21:18 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:17:13.252 10:21:18 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:13.252 10:21:18 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:13.252 10:21:18 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:13.252 10:21:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.252 ************************************ 00:17:13.252 START TEST raid_state_function_test_sb_md_separate 00:17:13.252 ************************************ 00:17:13.252 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:17:13.252 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:13.252 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:13.252 10:21:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:13.252 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:13.252 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:13.252 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:13.252 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=66997 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:13.253 Process raid pid: 66997 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66997' 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 66997 /var/tmp/spdk-raid.sock 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@830 -- # '[' -z 66997 ']' 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:13.253 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-raid.sock... 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:13.253 10:21:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.253 [2024-06-10 10:21:18.818112] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:17:13.253 [2024-06-10 10:21:18.818292] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:13.820 EAL: TSC is not safe to use in SMP mode 00:17:13.820 EAL: TSC is not invariant 00:17:13.820 [2024-06-10 10:21:19.312755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.820 [2024-06-10 10:21:19.424033] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:13.820 [2024-06-10 10:21:19.426790] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.078 [2024-06-10 10:21:19.427771] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.078 [2024-06-10 10:21:19.427792] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.702 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:14.702 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@863 -- # return 0 00:17:14.702 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:14.961 [2024-06-10 10:21:20.336489] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:14.961 [2024-06-10 10:21:20.336558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:14.961 [2024-06-10 10:21:20.336563] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.961 [2024-06-10 10:21:20.336572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.961 10:21:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.961 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.220 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.220 "name": "Existed_Raid", 00:17:15.220 "uuid": "2e1e27dc-2713-11ef-b084-113036b5c18d", 00:17:15.220 "strip_size_kb": 0, 00:17:15.220 "state": "configuring", 00:17:15.220 "raid_level": "raid1", 00:17:15.220 "superblock": true, 00:17:15.220 "num_base_bdevs": 2, 00:17:15.220 "num_base_bdevs_discovered": 0, 00:17:15.220 "num_base_bdevs_operational": 2, 00:17:15.220 "base_bdevs_list": [ 00:17:15.220 { 00:17:15.220 "name": "BaseBdev1", 00:17:15.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.220 "is_configured": false, 00:17:15.220 "data_offset": 0, 00:17:15.220 "data_size": 0 00:17:15.220 }, 00:17:15.220 { 00:17:15.220 "name": "BaseBdev2", 00:17:15.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.220 "is_configured": false, 00:17:15.220 "data_offset": 0, 00:17:15.220 "data_size": 0 00:17:15.220 } 00:17:15.220 ] 00:17:15.220 }' 00:17:15.220 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.220 10:21:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.478 10:21:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:15.738 [2024-06-10 10:21:21.328499] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.738 [2024-06-10 10:21:21.328527] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d601500 name Existed_Raid, state configuring 00:17:15.997 10:21:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:16.255 [2024-06-10 10:21:21.672540] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:16.255 [2024-06-10 10:21:21.672594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:16.255 [2024-06-10 10:21:21.672599] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:16.255 [2024-06-10 10:21:21.672607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:16.255 10:21:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:16.515 [2024-06-10 10:21:21.917396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.515 BaseBdev1 00:17:16.515 10:21:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:16.515 
10:21:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:17:16.515 10:21:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:16.516 10:21:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local i 00:17:16.516 10:21:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:16.516 10:21:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:16.516 10:21:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:16.775 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:17.102 [ 00:17:17.102 { 00:17:17.102 "name": "BaseBdev1", 00:17:17.102 "aliases": [ 00:17:17.102 "2f0f40a6-2713-11ef-b084-113036b5c18d" 00:17:17.102 ], 00:17:17.102 "product_name": "Malloc disk", 00:17:17.102 "block_size": 4096, 00:17:17.102 "num_blocks": 8192, 00:17:17.102 "uuid": "2f0f40a6-2713-11ef-b084-113036b5c18d", 00:17:17.102 "md_size": 32, 00:17:17.102 "md_interleave": false, 00:17:17.102 "dif_type": 0, 00:17:17.102 "assigned_rate_limits": { 00:17:17.102 "rw_ios_per_sec": 0, 00:17:17.102 "rw_mbytes_per_sec": 0, 00:17:17.102 "r_mbytes_per_sec": 0, 00:17:17.102 "w_mbytes_per_sec": 0 00:17:17.102 }, 00:17:17.102 "claimed": true, 00:17:17.102 "claim_type": "exclusive_write", 00:17:17.102 "zoned": false, 00:17:17.102 "supported_io_types": { 00:17:17.102 "read": true, 00:17:17.102 "write": true, 00:17:17.102 "unmap": true, 00:17:17.102 "write_zeroes": true, 00:17:17.102 "flush": true, 00:17:17.102 "reset": true, 00:17:17.102 "compare": false, 00:17:17.102 "compare_and_write": false, 00:17:17.102 "abort": true, 00:17:17.102 "nvme_admin": false, 00:17:17.102 "nvme_io": false 00:17:17.102 }, 00:17:17.102 "memory_domains": [ 00:17:17.102 { 00:17:17.102 "dma_device_id": "system", 00:17:17.102 "dma_device_type": 1 00:17:17.102 }, 00:17:17.102 { 00:17:17.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.102 "dma_device_type": 2 00:17:17.102 } 00:17:17.102 ], 00:17:17.102 "driver_specific": {} 00:17:17.102 } 00:17:17.102 ] 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # return 0 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.102 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.382 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.382 "name": "Existed_Raid", 00:17:17.382 "uuid": "2eea054d-2713-11ef-b084-113036b5c18d", 00:17:17.382 "strip_size_kb": 0, 00:17:17.382 "state": "configuring", 00:17:17.382 "raid_level": "raid1", 00:17:17.382 "superblock": true, 00:17:17.382 "num_base_bdevs": 2, 00:17:17.382 "num_base_bdevs_discovered": 1, 00:17:17.382 "num_base_bdevs_operational": 2, 00:17:17.382 "base_bdevs_list": [ 00:17:17.382 { 00:17:17.382 "name": "BaseBdev1", 00:17:17.382 "uuid": "2f0f40a6-2713-11ef-b084-113036b5c18d", 00:17:17.382 "is_configured": true, 00:17:17.382 "data_offset": 256, 00:17:17.382 "data_size": 7936 00:17:17.382 }, 00:17:17.382 { 00:17:17.382 "name": "BaseBdev2", 00:17:17.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.382 "is_configured": false, 00:17:17.382 "data_offset": 0, 00:17:17.382 "data_size": 0 00:17:17.382 } 00:17:17.382 ] 00:17:17.382 }' 00:17:17.382 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.382 10:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:17.639 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:17.897 [2024-06-10 10:21:23.440600] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:17.897 [2024-06-10 10:21:23.440643] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d601500 name Existed_Raid, state configuring 00:17:17.897 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:18.462 [2024-06-10 10:21:23.820637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.462 [2024-06-10 10:21:23.821354] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.462 [2024-06-10 10:21:23.821398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.462 10:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.721 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.721 "name": "Existed_Raid", 00:17:18.721 "uuid": "3031cb32-2713-11ef-b084-113036b5c18d", 00:17:18.721 "strip_size_kb": 0, 00:17:18.721 "state": "configuring", 00:17:18.721 "raid_level": "raid1", 00:17:18.721 "superblock": true, 00:17:18.721 "num_base_bdevs": 2, 00:17:18.721 "num_base_bdevs_discovered": 1, 00:17:18.721 "num_base_bdevs_operational": 2, 00:17:18.721 "base_bdevs_list": [ 00:17:18.721 { 00:17:18.721 "name": "BaseBdev1", 00:17:18.721 "uuid": "2f0f40a6-2713-11ef-b084-113036b5c18d", 00:17:18.721 "is_configured": true, 00:17:18.721 "data_offset": 256, 00:17:18.721 "data_size": 7936 00:17:18.721 }, 00:17:18.721 { 00:17:18.721 "name": "BaseBdev2", 00:17:18.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.721 "is_configured": false, 00:17:18.721 "data_offset": 0, 00:17:18.721 "data_size": 0 00:17:18.721 } 00:17:18.721 ] 00:17:18.721 }' 00:17:18.721 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.721 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.980 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:19.238 [2024-06-10 10:21:24.600730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:19.238 [2024-06-10 10:21:24.600786] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d601a00 00:17:19.238 [2024-06-10 10:21:24.600791] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:19.238 [2024-06-10 10:21:24.600811] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d664e20 00:17:19.238 [2024-06-10 10:21:24.600839] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d601a00 00:17:19.238 [2024-06-10 10:21:24.600842] 
bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d601a00 00:17:19.238 [2024-06-10 10:21:24.600855] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.238 BaseBdev2 00:17:19.238 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:19.238 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:17:19.238 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:19.238 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local i 00:17:19.238 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:19.238 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:19.238 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.238 10:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:19.547 [ 00:17:19.547 { 00:17:19.547 "name": "BaseBdev2", 00:17:19.547 "aliases": [ 00:17:19.547 "30a8d141-2713-11ef-b084-113036b5c18d" 00:17:19.547 ], 00:17:19.547 "product_name": "Malloc disk", 00:17:19.547 "block_size": 4096, 00:17:19.547 "num_blocks": 8192, 00:17:19.547 "uuid": "30a8d141-2713-11ef-b084-113036b5c18d", 00:17:19.547 "md_size": 32, 00:17:19.547 "md_interleave": false, 00:17:19.547 "dif_type": 0, 00:17:19.547 "assigned_rate_limits": { 00:17:19.547 "rw_ios_per_sec": 0, 00:17:19.547 "rw_mbytes_per_sec": 0, 00:17:19.547 "r_mbytes_per_sec": 0, 00:17:19.547 "w_mbytes_per_sec": 0 00:17:19.547 }, 00:17:19.547 "claimed": true, 00:17:19.547 "claim_type": "exclusive_write", 00:17:19.547 "zoned": false, 00:17:19.547 "supported_io_types": { 00:17:19.547 "read": true, 00:17:19.547 "write": true, 00:17:19.547 "unmap": true, 00:17:19.547 "write_zeroes": true, 00:17:19.547 "flush": true, 00:17:19.547 "reset": true, 00:17:19.547 "compare": false, 00:17:19.547 "compare_and_write": false, 00:17:19.547 "abort": true, 00:17:19.547 "nvme_admin": false, 00:17:19.547 "nvme_io": false 00:17:19.547 }, 00:17:19.547 "memory_domains": [ 00:17:19.547 { 00:17:19.547 "dma_device_id": "system", 00:17:19.547 "dma_device_type": 1 00:17:19.547 }, 00:17:19.547 { 00:17:19.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.547 "dma_device_type": 2 00:17:19.547 } 00:17:19.547 ], 00:17:19.547 "driver_specific": {} 00:17:19.547 } 00:17:19.547 ] 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # return 0 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:19.547 
10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.547 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.825 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.825 "name": "Existed_Raid", 00:17:19.825 "uuid": "3031cb32-2713-11ef-b084-113036b5c18d", 00:17:19.825 "strip_size_kb": 0, 00:17:19.825 "state": "online", 00:17:19.825 "raid_level": "raid1", 00:17:19.825 "superblock": true, 00:17:19.825 "num_base_bdevs": 2, 00:17:19.825 "num_base_bdevs_discovered": 2, 00:17:19.825 "num_base_bdevs_operational": 2, 00:17:19.825 "base_bdevs_list": [ 00:17:19.825 { 00:17:19.825 "name": "BaseBdev1", 00:17:19.825 "uuid": "2f0f40a6-2713-11ef-b084-113036b5c18d", 00:17:19.825 "is_configured": true, 00:17:19.825 "data_offset": 256, 00:17:19.825 "data_size": 7936 00:17:19.825 }, 00:17:19.825 { 00:17:19.825 "name": "BaseBdev2", 00:17:19.825 "uuid": "30a8d141-2713-11ef-b084-113036b5c18d", 00:17:19.825 "is_configured": true, 00:17:19.825 "data_offset": 256, 00:17:19.825 "data_size": 7936 00:17:19.825 } 00:17:19.825 ] 00:17:19.825 }' 00:17:19.825 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.825 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.089 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:20.089 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:20.089 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:20.089 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:20.089 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:20.089 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:20.089 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:20.089 
10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:20.348 [2024-06-10 10:21:25.836741] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.348 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:20.348 "name": "Existed_Raid", 00:17:20.348 "aliases": [ 00:17:20.348 "3031cb32-2713-11ef-b084-113036b5c18d" 00:17:20.348 ], 00:17:20.348 "product_name": "Raid Volume", 00:17:20.348 "block_size": 4096, 00:17:20.348 "num_blocks": 7936, 00:17:20.348 "uuid": "3031cb32-2713-11ef-b084-113036b5c18d", 00:17:20.348 "md_size": 32, 00:17:20.348 "md_interleave": false, 00:17:20.348 "dif_type": 0, 00:17:20.348 "assigned_rate_limits": { 00:17:20.348 "rw_ios_per_sec": 0, 00:17:20.348 "rw_mbytes_per_sec": 0, 00:17:20.348 "r_mbytes_per_sec": 0, 00:17:20.348 "w_mbytes_per_sec": 0 00:17:20.348 }, 00:17:20.348 "claimed": false, 00:17:20.348 "zoned": false, 00:17:20.348 "supported_io_types": { 00:17:20.348 "read": true, 00:17:20.348 "write": true, 00:17:20.348 "unmap": false, 00:17:20.348 "write_zeroes": true, 00:17:20.348 "flush": false, 00:17:20.348 "reset": true, 00:17:20.348 "compare": false, 00:17:20.348 "compare_and_write": false, 00:17:20.348 "abort": false, 00:17:20.348 "nvme_admin": false, 00:17:20.348 "nvme_io": false 00:17:20.348 }, 00:17:20.348 "memory_domains": [ 00:17:20.348 { 00:17:20.348 "dma_device_id": "system", 00:17:20.348 "dma_device_type": 1 00:17:20.348 }, 00:17:20.348 { 00:17:20.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.348 "dma_device_type": 2 00:17:20.348 }, 00:17:20.348 { 00:17:20.348 "dma_device_id": "system", 00:17:20.348 "dma_device_type": 1 00:17:20.348 }, 00:17:20.348 { 00:17:20.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.348 "dma_device_type": 2 00:17:20.348 } 00:17:20.348 ], 00:17:20.348 "driver_specific": { 00:17:20.348 "raid": { 00:17:20.348 "uuid": "3031cb32-2713-11ef-b084-113036b5c18d", 00:17:20.348 "strip_size_kb": 0, 00:17:20.348 "state": "online", 00:17:20.348 "raid_level": "raid1", 00:17:20.348 "superblock": true, 00:17:20.348 "num_base_bdevs": 2, 00:17:20.348 "num_base_bdevs_discovered": 2, 00:17:20.348 "num_base_bdevs_operational": 2, 00:17:20.348 "base_bdevs_list": [ 00:17:20.348 { 00:17:20.348 "name": "BaseBdev1", 00:17:20.348 "uuid": "2f0f40a6-2713-11ef-b084-113036b5c18d", 00:17:20.348 "is_configured": true, 00:17:20.348 "data_offset": 256, 00:17:20.348 "data_size": 7936 00:17:20.348 }, 00:17:20.348 { 00:17:20.348 "name": "BaseBdev2", 00:17:20.348 "uuid": "30a8d141-2713-11ef-b084-113036b5c18d", 00:17:20.348 "is_configured": true, 00:17:20.348 "data_offset": 256, 00:17:20.348 "data_size": 7936 00:17:20.348 } 00:17:20.348 ] 00:17:20.348 } 00:17:20.348 } 00:17:20.348 }' 00:17:20.348 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:20.348 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:20.348 BaseBdev2' 00:17:20.348 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:20.348 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:20.348 10:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:20.915 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:20.915 "name": "BaseBdev1", 00:17:20.915 "aliases": [ 00:17:20.915 "2f0f40a6-2713-11ef-b084-113036b5c18d" 00:17:20.915 ], 00:17:20.915 "product_name": "Malloc disk", 00:17:20.915 "block_size": 4096, 00:17:20.916 "num_blocks": 8192, 00:17:20.916 "uuid": "2f0f40a6-2713-11ef-b084-113036b5c18d", 00:17:20.916 "md_size": 32, 00:17:20.916 "md_interleave": false, 00:17:20.916 "dif_type": 0, 00:17:20.916 "assigned_rate_limits": { 00:17:20.916 "rw_ios_per_sec": 0, 00:17:20.916 "rw_mbytes_per_sec": 0, 00:17:20.916 "r_mbytes_per_sec": 0, 00:17:20.916 "w_mbytes_per_sec": 0 00:17:20.916 }, 00:17:20.916 "claimed": true, 00:17:20.916 "claim_type": "exclusive_write", 00:17:20.916 "zoned": false, 00:17:20.916 "supported_io_types": { 00:17:20.916 "read": true, 00:17:20.916 "write": true, 00:17:20.916 "unmap": true, 00:17:20.916 "write_zeroes": true, 00:17:20.916 "flush": true, 00:17:20.916 "reset": true, 00:17:20.916 "compare": false, 00:17:20.916 "compare_and_write": false, 00:17:20.916 "abort": true, 00:17:20.916 "nvme_admin": false, 00:17:20.916 "nvme_io": false 00:17:20.916 }, 00:17:20.916 "memory_domains": [ 00:17:20.916 { 00:17:20.916 "dma_device_id": "system", 00:17:20.916 "dma_device_type": 1 00:17:20.916 }, 00:17:20.916 { 00:17:20.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.916 "dma_device_type": 2 00:17:20.916 } 00:17:20.916 ], 00:17:20.916 "driver_specific": {} 00:17:20.916 }' 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:20.916 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:17:21.175 "name": "BaseBdev2", 00:17:21.175 "aliases": [ 00:17:21.175 "30a8d141-2713-11ef-b084-113036b5c18d" 00:17:21.175 ], 00:17:21.175 "product_name": "Malloc disk", 00:17:21.175 "block_size": 4096, 00:17:21.175 "num_blocks": 8192, 00:17:21.175 "uuid": "30a8d141-2713-11ef-b084-113036b5c18d", 00:17:21.175 "md_size": 32, 00:17:21.175 "md_interleave": false, 00:17:21.175 "dif_type": 0, 00:17:21.175 "assigned_rate_limits": { 00:17:21.175 "rw_ios_per_sec": 0, 00:17:21.175 "rw_mbytes_per_sec": 0, 00:17:21.175 "r_mbytes_per_sec": 0, 00:17:21.175 "w_mbytes_per_sec": 0 00:17:21.175 }, 00:17:21.175 "claimed": true, 00:17:21.175 "claim_type": "exclusive_write", 00:17:21.175 "zoned": false, 00:17:21.175 "supported_io_types": { 00:17:21.175 "read": true, 00:17:21.175 "write": true, 00:17:21.175 "unmap": true, 00:17:21.175 "write_zeroes": true, 00:17:21.175 "flush": true, 00:17:21.175 "reset": true, 00:17:21.175 "compare": false, 00:17:21.175 "compare_and_write": false, 00:17:21.175 "abort": true, 00:17:21.175 "nvme_admin": false, 00:17:21.175 "nvme_io": false 00:17:21.175 }, 00:17:21.175 "memory_domains": [ 00:17:21.175 { 00:17:21.175 "dma_device_id": "system", 00:17:21.175 "dma_device_type": 1 00:17:21.175 }, 00:17:21.175 { 00:17:21.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.175 "dma_device_type": 2 00:17:21.175 } 00:17:21.175 ], 00:17:21.175 "driver_specific": {} 00:17:21.175 }' 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:21.175 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:21.434 [2024-06-10 10:21:26.888748] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:21.434 10:21:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.434 10:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.693 10:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.693 "name": "Existed_Raid", 00:17:21.693 "uuid": "3031cb32-2713-11ef-b084-113036b5c18d", 00:17:21.693 "strip_size_kb": 0, 00:17:21.693 "state": "online", 00:17:21.693 "raid_level": "raid1", 00:17:21.693 "superblock": true, 00:17:21.693 "num_base_bdevs": 2, 00:17:21.693 "num_base_bdevs_discovered": 1, 00:17:21.693 "num_base_bdevs_operational": 1, 00:17:21.693 "base_bdevs_list": [ 00:17:21.693 { 00:17:21.693 "name": null, 00:17:21.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.693 "is_configured": false, 00:17:21.693 "data_offset": 256, 00:17:21.693 "data_size": 7936 00:17:21.693 }, 00:17:21.693 { 00:17:21.693 "name": "BaseBdev2", 00:17:21.693 "uuid": "30a8d141-2713-11ef-b084-113036b5c18d", 00:17:21.693 "is_configured": true, 00:17:21.693 "data_offset": 256, 00:17:21.693 "data_size": 7936 00:17:21.693 } 00:17:21.693 ] 00:17:21.693 }' 00:17:21.693 10:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.693 10:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.977 10:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:21.977 10:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:21.977 10:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:21.977 10:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.269 10:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:22.269 10:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:22.269 10:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:22.533 [2024-06-10 10:21:28.017561] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:22.534 [2024-06-10 10:21:28.017601] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.534 [2024-06-10 10:21:28.022438] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.534 [2024-06-10 10:21:28.022452] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.534 [2024-06-10 10:21:28.022457] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d601a00 name Existed_Raid, state offline 00:17:22.534 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:22.534 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:22.534 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:22.534 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 66997 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@949 -- # '[' -z 66997 ']' 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # kill -0 66997 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # uname 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # ps -c -o command 66997 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # tail -1 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:17:22.794 killing process with pid 66997 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # echo 'killing process with pid 66997' 00:17:22.794 10:21:28 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # kill 66997 00:17:22.794 [2024-06-10 10:21:28.268252] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.794 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # wait 66997 00:17:22.794 [2024-06-10 10:21:28.268294] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.054 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:17:23.054 00:17:23.054 real 0m9.633s 00:17:23.054 user 0m16.979s 00:17:23.054 sys 0m1.545s 00:17:23.054 ************************************ 00:17:23.054 END TEST raid_state_function_test_sb_md_separate 00:17:23.054 ************************************ 00:17:23.054 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:23.054 10:21:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.054 10:21:28 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:23.054 10:21:28 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:17:23.054 10:21:28 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:23.054 10:21:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.054 ************************************ 00:17:23.054 START TEST raid_superblock_test_md_separate 00:17:23.054 ************************************ 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 
-- # raid_pid=67271 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 67271 /var/tmp/spdk-raid.sock 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@830 -- # '[' -z 67271 ']' 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:23.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.054 10:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:23.054 [2024-06-10 10:21:28.494132] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:17:23.054 [2024-06-10 10:21:28.494322] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:23.622 EAL: TSC is not safe to use in SMP mode 00:17:23.622 EAL: TSC is not invariant 00:17:23.622 [2024-06-10 10:21:28.972719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.622 [2024-06-10 10:21:29.053257] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
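For reference, the setup that the xtrace below drives over the RPC socket can be reproduced by hand against a bdev_svc instance listening on /var/tmp/spdk-raid.sock; this is an illustrative sketch assembled only from commands that appear verbatim later in this trace, not part of the captured log itself (the rpc.py path matches the FreeBSD VM layout used here):

  # create two 32 MiB malloc bdevs with 4096-byte blocks and 32 bytes of separate metadata (-m 32)
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2
  # wrap each malloc bdev in a passthru bdev with a fixed UUID
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # assemble the raid1 volume with an on-disk superblock (-s)
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
  # query the resulting raid bdev state, as verify_raid_bdev_state does
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'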
00:17:23.622 [2024-06-10 10:21:29.055389] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.622 [2024-06-10 10:21:29.056133] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.622 [2024-06-10 10:21:29.056145] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.880 10:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:23.880 10:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@863 -- # return 0 00:17:23.880 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:23.880 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:23.880 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:23.880 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:23.880 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:23.881 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:23.881 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:23.881 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:23.881 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:24.138 malloc1 00:17:24.138 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:24.404 [2024-06-10 10:21:29.862625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:24.404 [2024-06-10 10:21:29.862682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.404 [2024-06-10 10:21:29.862694] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b282780 00:17:24.404 [2024-06-10 10:21:29.862702] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.404 [2024-06-10 10:21:29.863419] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.404 [2024-06-10 10:21:29.863447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:24.404 pt1 00:17:24.404 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:24.404 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:24.404 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:24.404 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:24.404 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:24.404 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:24.404 10:21:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:24.404 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:24.404 10:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:24.663 malloc2 00:17:24.663 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:24.922 [2024-06-10 10:21:30.374659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:24.922 [2024-06-10 10:21:30.374714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.922 [2024-06-10 10:21:30.374725] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b282c80 00:17:24.922 [2024-06-10 10:21:30.374732] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.922 [2024-06-10 10:21:30.375228] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.922 [2024-06-10 10:21:30.375255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:24.922 pt2 00:17:24.922 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:24.922 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:24.922 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:25.181 [2024-06-10 10:21:30.630669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:25.181 [2024-06-10 10:21:30.631129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.181 [2024-06-10 10:21:30.631189] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b282f00 00:17:25.181 [2024-06-10 10:21:30.631193] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.181 [2024-06-10 10:21:30.631226] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b2e5e20 00:17:25.181 [2024-06-10 10:21:30.631250] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b282f00 00:17:25.181 [2024-06-10 10:21:30.631253] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b282f00 00:17:25.181 [2024-06-10 10:21:30.631266] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:25.181 
10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.181 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.440 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.440 "name": "raid_bdev1", 00:17:25.440 "uuid": "3440ec16-2713-11ef-b084-113036b5c18d", 00:17:25.440 "strip_size_kb": 0, 00:17:25.440 "state": "online", 00:17:25.440 "raid_level": "raid1", 00:17:25.440 "superblock": true, 00:17:25.440 "num_base_bdevs": 2, 00:17:25.440 "num_base_bdevs_discovered": 2, 00:17:25.440 "num_base_bdevs_operational": 2, 00:17:25.440 "base_bdevs_list": [ 00:17:25.440 { 00:17:25.440 "name": "pt1", 00:17:25.440 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.440 "is_configured": true, 00:17:25.440 "data_offset": 256, 00:17:25.440 "data_size": 7936 00:17:25.440 }, 00:17:25.440 { 00:17:25.440 "name": "pt2", 00:17:25.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.440 "is_configured": true, 00:17:25.440 "data_offset": 256, 00:17:25.440 "data_size": 7936 00:17:25.440 } 00:17:25.440 ] 00:17:25.440 }' 00:17:25.440 10:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.440 10:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.698 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:25.698 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:25.698 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:25.698 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:25.698 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:25.698 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:25.698 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:25.698 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:25.957 [2024-06-10 10:21:31.422735] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.957 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:25.957 "name": "raid_bdev1", 00:17:25.957 "aliases": [ 00:17:25.957 "3440ec16-2713-11ef-b084-113036b5c18d" 00:17:25.957 ], 00:17:25.957 "product_name": "Raid Volume", 00:17:25.957 
"block_size": 4096, 00:17:25.957 "num_blocks": 7936, 00:17:25.957 "uuid": "3440ec16-2713-11ef-b084-113036b5c18d", 00:17:25.957 "md_size": 32, 00:17:25.957 "md_interleave": false, 00:17:25.957 "dif_type": 0, 00:17:25.957 "assigned_rate_limits": { 00:17:25.957 "rw_ios_per_sec": 0, 00:17:25.957 "rw_mbytes_per_sec": 0, 00:17:25.957 "r_mbytes_per_sec": 0, 00:17:25.957 "w_mbytes_per_sec": 0 00:17:25.957 }, 00:17:25.957 "claimed": false, 00:17:25.957 "zoned": false, 00:17:25.957 "supported_io_types": { 00:17:25.957 "read": true, 00:17:25.957 "write": true, 00:17:25.957 "unmap": false, 00:17:25.957 "write_zeroes": true, 00:17:25.957 "flush": false, 00:17:25.957 "reset": true, 00:17:25.957 "compare": false, 00:17:25.957 "compare_and_write": false, 00:17:25.957 "abort": false, 00:17:25.957 "nvme_admin": false, 00:17:25.957 "nvme_io": false 00:17:25.957 }, 00:17:25.957 "memory_domains": [ 00:17:25.957 { 00:17:25.957 "dma_device_id": "system", 00:17:25.957 "dma_device_type": 1 00:17:25.957 }, 00:17:25.957 { 00:17:25.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.957 "dma_device_type": 2 00:17:25.957 }, 00:17:25.957 { 00:17:25.957 "dma_device_id": "system", 00:17:25.957 "dma_device_type": 1 00:17:25.957 }, 00:17:25.957 { 00:17:25.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.957 "dma_device_type": 2 00:17:25.957 } 00:17:25.957 ], 00:17:25.957 "driver_specific": { 00:17:25.957 "raid": { 00:17:25.957 "uuid": "3440ec16-2713-11ef-b084-113036b5c18d", 00:17:25.957 "strip_size_kb": 0, 00:17:25.957 "state": "online", 00:17:25.957 "raid_level": "raid1", 00:17:25.957 "superblock": true, 00:17:25.957 "num_base_bdevs": 2, 00:17:25.957 "num_base_bdevs_discovered": 2, 00:17:25.957 "num_base_bdevs_operational": 2, 00:17:25.957 "base_bdevs_list": [ 00:17:25.957 { 00:17:25.957 "name": "pt1", 00:17:25.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.957 "is_configured": true, 00:17:25.957 "data_offset": 256, 00:17:25.957 "data_size": 7936 00:17:25.958 }, 00:17:25.958 { 00:17:25.958 "name": "pt2", 00:17:25.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.958 "is_configured": true, 00:17:25.958 "data_offset": 256, 00:17:25.958 "data_size": 7936 00:17:25.958 } 00:17:25.958 ] 00:17:25.958 } 00:17:25.958 } 00:17:25.958 }' 00:17:25.958 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:25.958 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:25.958 pt2' 00:17:25.958 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:25.958 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:25.958 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:26.217 "name": "pt1", 00:17:26.217 "aliases": [ 00:17:26.217 "00000000-0000-0000-0000-000000000001" 00:17:26.217 ], 00:17:26.217 "product_name": "passthru", 00:17:26.217 "block_size": 4096, 00:17:26.217 "num_blocks": 8192, 00:17:26.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:26.217 "md_size": 32, 00:17:26.217 "md_interleave": false, 00:17:26.217 "dif_type": 0, 00:17:26.217 "assigned_rate_limits": { 00:17:26.217 
"rw_ios_per_sec": 0, 00:17:26.217 "rw_mbytes_per_sec": 0, 00:17:26.217 "r_mbytes_per_sec": 0, 00:17:26.217 "w_mbytes_per_sec": 0 00:17:26.217 }, 00:17:26.217 "claimed": true, 00:17:26.217 "claim_type": "exclusive_write", 00:17:26.217 "zoned": false, 00:17:26.217 "supported_io_types": { 00:17:26.217 "read": true, 00:17:26.217 "write": true, 00:17:26.217 "unmap": true, 00:17:26.217 "write_zeroes": true, 00:17:26.217 "flush": true, 00:17:26.217 "reset": true, 00:17:26.217 "compare": false, 00:17:26.217 "compare_and_write": false, 00:17:26.217 "abort": true, 00:17:26.217 "nvme_admin": false, 00:17:26.217 "nvme_io": false 00:17:26.217 }, 00:17:26.217 "memory_domains": [ 00:17:26.217 { 00:17:26.217 "dma_device_id": "system", 00:17:26.217 "dma_device_type": 1 00:17:26.217 }, 00:17:26.217 { 00:17:26.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.217 "dma_device_type": 2 00:17:26.217 } 00:17:26.217 ], 00:17:26.217 "driver_specific": { 00:17:26.217 "passthru": { 00:17:26.217 "name": "pt1", 00:17:26.217 "base_bdev_name": "malloc1" 00:17:26.217 } 00:17:26.217 } 00:17:26.217 }' 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:26.217 10:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:26.477 "name": "pt2", 00:17:26.477 "aliases": [ 00:17:26.477 "00000000-0000-0000-0000-000000000002" 00:17:26.477 ], 00:17:26.477 "product_name": "passthru", 00:17:26.477 "block_size": 4096, 00:17:26.477 "num_blocks": 8192, 00:17:26.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:26.477 "md_size": 32, 00:17:26.477 "md_interleave": false, 00:17:26.477 "dif_type": 0, 00:17:26.477 "assigned_rate_limits": { 00:17:26.477 "rw_ios_per_sec": 0, 00:17:26.477 "rw_mbytes_per_sec": 0, 00:17:26.477 "r_mbytes_per_sec": 0, 00:17:26.477 "w_mbytes_per_sec": 0 00:17:26.477 }, 00:17:26.477 "claimed": 
true, 00:17:26.477 "claim_type": "exclusive_write", 00:17:26.477 "zoned": false, 00:17:26.477 "supported_io_types": { 00:17:26.477 "read": true, 00:17:26.477 "write": true, 00:17:26.477 "unmap": true, 00:17:26.477 "write_zeroes": true, 00:17:26.477 "flush": true, 00:17:26.477 "reset": true, 00:17:26.477 "compare": false, 00:17:26.477 "compare_and_write": false, 00:17:26.477 "abort": true, 00:17:26.477 "nvme_admin": false, 00:17:26.477 "nvme_io": false 00:17:26.477 }, 00:17:26.477 "memory_domains": [ 00:17:26.477 { 00:17:26.477 "dma_device_id": "system", 00:17:26.477 "dma_device_type": 1 00:17:26.477 }, 00:17:26.477 { 00:17:26.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.477 "dma_device_type": 2 00:17:26.477 } 00:17:26.477 ], 00:17:26.477 "driver_specific": { 00:17:26.477 "passthru": { 00:17:26.477 "name": "pt2", 00:17:26.477 "base_bdev_name": "malloc2" 00:17:26.477 } 00:17:26.477 } 00:17:26.477 }' 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:26.477 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:26.745 [2024-06-10 10:21:32.294766] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.745 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3440ec16-2713-11ef-b084-113036b5c18d 00:17:26.745 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 3440ec16-2713-11ef-b084-113036b5c18d ']' 00:17:26.745 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:27.056 [2024-06-10 10:21:32.526750] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.056 [2024-06-10 10:21:32.526774] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.056 [2024-06-10 10:21:32.526793] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.056 [2024-06-10 
10:21:32.526807] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.056 [2024-06-10 10:21:32.526811] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b282f00 name raid_bdev1, state offline 00:17:27.056 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.056 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:27.349 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:27.349 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:27.349 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:27.350 10:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:27.610 10:21:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:27.610 10:21:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:27.868 10:21:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:27.868 10:21:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@649 -- # local es=0 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.126 10:21:33 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:28.126 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:28.384 [2024-06-10 10:21:33.778825] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:28.384 [2024-06-10 10:21:33.779289] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:28.384 [2024-06-10 10:21:33.779306] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:28.384 [2024-06-10 10:21:33.779346] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:28.384 [2024-06-10 10:21:33.779356] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:28.384 [2024-06-10 10:21:33.779361] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b282c80 name raid_bdev1, state configuring 00:17:28.384 request: 00:17:28.384 { 00:17:28.384 "name": "raid_bdev1", 00:17:28.384 "raid_level": "raid1", 00:17:28.384 "base_bdevs": [ 00:17:28.384 "malloc1", 00:17:28.384 "malloc2" 00:17:28.384 ], 00:17:28.384 "superblock": false, 00:17:28.384 "method": "bdev_raid_create", 00:17:28.384 "req_id": 1 00:17:28.384 } 00:17:28.384 Got JSON-RPC error response 00:17:28.384 response: 00:17:28.384 { 00:17:28.384 "code": -17, 00:17:28.384 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:28.384 } 00:17:28.384 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # es=1 00:17:28.385 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:28.385 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:28.385 10:21:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:28.385 10:21:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.385 10:21:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:28.644 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:28.644 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:28.644 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:28.904 [2024-06-10 10:21:34.326831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:28.904 [2024-06-10 10:21:34.326886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.904 [2024-06-10 10:21:34.326898] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b282780 00:17:28.904 [2024-06-10 10:21:34.326906] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.904 [2024-06-10 10:21:34.327377] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:17:28.904 [2024-06-10 10:21:34.327403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:28.904 [2024-06-10 10:21:34.327425] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:28.904 [2024-06-10 10:21:34.327435] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:28.904 pt1 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.904 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.162 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.162 "name": "raid_bdev1", 00:17:29.162 "uuid": "3440ec16-2713-11ef-b084-113036b5c18d", 00:17:29.162 "strip_size_kb": 0, 00:17:29.162 "state": "configuring", 00:17:29.162 "raid_level": "raid1", 00:17:29.162 "superblock": true, 00:17:29.163 "num_base_bdevs": 2, 00:17:29.163 "num_base_bdevs_discovered": 1, 00:17:29.163 "num_base_bdevs_operational": 2, 00:17:29.163 "base_bdevs_list": [ 00:17:29.163 { 00:17:29.163 "name": "pt1", 00:17:29.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.163 "is_configured": true, 00:17:29.163 "data_offset": 256, 00:17:29.163 "data_size": 7936 00:17:29.163 }, 00:17:29.163 { 00:17:29.163 "name": null, 00:17:29.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.163 "is_configured": false, 00:17:29.163 "data_offset": 256, 00:17:29.163 "data_size": 7936 00:17:29.163 } 00:17:29.163 ] 00:17:29.163 }' 00:17:29.163 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.163 10:21:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.421 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:29.421 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:29.421 10:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:29.421 10:21:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:29.701 [2024-06-10 10:21:35.126845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:29.701 [2024-06-10 10:21:35.126899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.701 [2024-06-10 10:21:35.126926] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b282f00 00:17:29.701 [2024-06-10 10:21:35.126934] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.701 [2024-06-10 10:21:35.126992] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.701 [2024-06-10 10:21:35.127000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:29.701 [2024-06-10 10:21:35.127018] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:29.701 [2024-06-10 10:21:35.127025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.701 [2024-06-10 10:21:35.127046] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b283180 00:17:29.701 [2024-06-10 10:21:35.127049] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:29.701 [2024-06-10 10:21:35.127083] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b2e5e20 00:17:29.701 [2024-06-10 10:21:35.127102] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b283180 00:17:29.701 [2024-06-10 10:21:35.127106] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b283180 00:17:29.701 [2024-06-10 10:21:35.127118] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.701 pt2 00:17:29.701 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:29.701 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.702 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.960 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.960 "name": "raid_bdev1", 00:17:29.960 "uuid": "3440ec16-2713-11ef-b084-113036b5c18d", 00:17:29.960 "strip_size_kb": 0, 00:17:29.960 "state": "online", 00:17:29.960 "raid_level": "raid1", 00:17:29.960 "superblock": true, 00:17:29.960 "num_base_bdevs": 2, 00:17:29.960 "num_base_bdevs_discovered": 2, 00:17:29.960 "num_base_bdevs_operational": 2, 00:17:29.960 "base_bdevs_list": [ 00:17:29.960 { 00:17:29.960 "name": "pt1", 00:17:29.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.960 "is_configured": true, 00:17:29.960 "data_offset": 256, 00:17:29.960 "data_size": 7936 00:17:29.960 }, 00:17:29.960 { 00:17:29.960 "name": "pt2", 00:17:29.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.960 "is_configured": true, 00:17:29.960 "data_offset": 256, 00:17:29.960 "data_size": 7936 00:17:29.960 } 00:17:29.960 ] 00:17:29.960 }' 00:17:29.960 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.960 10:21:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.217 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:30.218 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:30.218 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:30.218 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:30.218 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:30.218 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:30.218 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:30.218 10:21:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:30.476 [2024-06-10 10:21:35.998908] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.476 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:30.476 "name": "raid_bdev1", 00:17:30.476 "aliases": [ 00:17:30.476 "3440ec16-2713-11ef-b084-113036b5c18d" 00:17:30.476 ], 00:17:30.476 "product_name": "Raid Volume", 00:17:30.476 "block_size": 4096, 00:17:30.476 "num_blocks": 7936, 00:17:30.477 "uuid": "3440ec16-2713-11ef-b084-113036b5c18d", 00:17:30.477 "md_size": 32, 00:17:30.477 "md_interleave": false, 00:17:30.477 "dif_type": 0, 00:17:30.477 "assigned_rate_limits": { 00:17:30.477 "rw_ios_per_sec": 0, 00:17:30.477 "rw_mbytes_per_sec": 0, 00:17:30.477 "r_mbytes_per_sec": 0, 00:17:30.477 "w_mbytes_per_sec": 0 00:17:30.477 }, 00:17:30.477 "claimed": false, 00:17:30.477 "zoned": false, 00:17:30.477 "supported_io_types": { 00:17:30.477 "read": true, 00:17:30.477 "write": true, 00:17:30.477 "unmap": false, 00:17:30.477 "write_zeroes": true, 00:17:30.477 "flush": false, 00:17:30.477 "reset": true, 
00:17:30.477 "compare": false, 00:17:30.477 "compare_and_write": false, 00:17:30.477 "abort": false, 00:17:30.477 "nvme_admin": false, 00:17:30.477 "nvme_io": false 00:17:30.477 }, 00:17:30.477 "memory_domains": [ 00:17:30.477 { 00:17:30.477 "dma_device_id": "system", 00:17:30.477 "dma_device_type": 1 00:17:30.477 }, 00:17:30.477 { 00:17:30.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.477 "dma_device_type": 2 00:17:30.477 }, 00:17:30.477 { 00:17:30.477 "dma_device_id": "system", 00:17:30.477 "dma_device_type": 1 00:17:30.477 }, 00:17:30.477 { 00:17:30.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.477 "dma_device_type": 2 00:17:30.477 } 00:17:30.477 ], 00:17:30.477 "driver_specific": { 00:17:30.477 "raid": { 00:17:30.477 "uuid": "3440ec16-2713-11ef-b084-113036b5c18d", 00:17:30.477 "strip_size_kb": 0, 00:17:30.477 "state": "online", 00:17:30.477 "raid_level": "raid1", 00:17:30.477 "superblock": true, 00:17:30.477 "num_base_bdevs": 2, 00:17:30.477 "num_base_bdevs_discovered": 2, 00:17:30.477 "num_base_bdevs_operational": 2, 00:17:30.477 "base_bdevs_list": [ 00:17:30.477 { 00:17:30.477 "name": "pt1", 00:17:30.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.477 "is_configured": true, 00:17:30.477 "data_offset": 256, 00:17:30.477 "data_size": 7936 00:17:30.477 }, 00:17:30.477 { 00:17:30.477 "name": "pt2", 00:17:30.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.477 "is_configured": true, 00:17:30.477 "data_offset": 256, 00:17:30.477 "data_size": 7936 00:17:30.477 } 00:17:30.477 ] 00:17:30.477 } 00:17:30.477 } 00:17:30.477 }' 00:17:30.477 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.477 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:30.477 pt2' 00:17:30.477 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:30.477 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:30.477 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:30.735 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:30.735 "name": "pt1", 00:17:30.735 "aliases": [ 00:17:30.735 "00000000-0000-0000-0000-000000000001" 00:17:30.735 ], 00:17:30.735 "product_name": "passthru", 00:17:30.735 "block_size": 4096, 00:17:30.735 "num_blocks": 8192, 00:17:30.735 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.735 "md_size": 32, 00:17:30.735 "md_interleave": false, 00:17:30.735 "dif_type": 0, 00:17:30.735 "assigned_rate_limits": { 00:17:30.735 "rw_ios_per_sec": 0, 00:17:30.735 "rw_mbytes_per_sec": 0, 00:17:30.735 "r_mbytes_per_sec": 0, 00:17:30.735 "w_mbytes_per_sec": 0 00:17:30.735 }, 00:17:30.735 "claimed": true, 00:17:30.735 "claim_type": "exclusive_write", 00:17:30.735 "zoned": false, 00:17:30.735 "supported_io_types": { 00:17:30.735 "read": true, 00:17:30.735 "write": true, 00:17:30.735 "unmap": true, 00:17:30.735 "write_zeroes": true, 00:17:30.735 "flush": true, 00:17:30.735 "reset": true, 00:17:30.735 "compare": false, 00:17:30.735 "compare_and_write": false, 00:17:30.735 "abort": true, 00:17:30.735 "nvme_admin": false, 00:17:30.735 "nvme_io": false 00:17:30.735 }, 00:17:30.735 "memory_domains": [ 
00:17:30.735 { 00:17:30.735 "dma_device_id": "system", 00:17:30.735 "dma_device_type": 1 00:17:30.735 }, 00:17:30.735 { 00:17:30.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.735 "dma_device_type": 2 00:17:30.735 } 00:17:30.735 ], 00:17:30.735 "driver_specific": { 00:17:30.735 "passthru": { 00:17:30.735 "name": "pt1", 00:17:30.735 "base_bdev_name": "malloc1" 00:17:30.735 } 00:17:30.735 } 00:17:30.735 }' 00:17:30.735 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:30.735 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:30.735 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:30.735 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:30.735 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:30.735 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:30.735 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:30.735 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:30.993 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:30.993 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:30.993 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:30.993 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:30.993 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:30.993 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:30.993 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:31.252 "name": "pt2", 00:17:31.252 "aliases": [ 00:17:31.252 "00000000-0000-0000-0000-000000000002" 00:17:31.252 ], 00:17:31.252 "product_name": "passthru", 00:17:31.252 "block_size": 4096, 00:17:31.252 "num_blocks": 8192, 00:17:31.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.252 "md_size": 32, 00:17:31.252 "md_interleave": false, 00:17:31.252 "dif_type": 0, 00:17:31.252 "assigned_rate_limits": { 00:17:31.252 "rw_ios_per_sec": 0, 00:17:31.252 "rw_mbytes_per_sec": 0, 00:17:31.252 "r_mbytes_per_sec": 0, 00:17:31.252 "w_mbytes_per_sec": 0 00:17:31.252 }, 00:17:31.252 "claimed": true, 00:17:31.252 "claim_type": "exclusive_write", 00:17:31.252 "zoned": false, 00:17:31.252 "supported_io_types": { 00:17:31.252 "read": true, 00:17:31.252 "write": true, 00:17:31.252 "unmap": true, 00:17:31.252 "write_zeroes": true, 00:17:31.252 "flush": true, 00:17:31.252 "reset": true, 00:17:31.252 "compare": false, 00:17:31.252 "compare_and_write": false, 00:17:31.252 "abort": true, 00:17:31.252 "nvme_admin": false, 00:17:31.252 "nvme_io": false 00:17:31.252 }, 00:17:31.252 "memory_domains": [ 00:17:31.252 { 00:17:31.252 "dma_device_id": "system", 00:17:31.252 "dma_device_type": 1 00:17:31.252 }, 00:17:31.252 { 00:17:31.252 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:31.252 "dma_device_type": 2 00:17:31.252 } 00:17:31.252 ], 00:17:31.252 "driver_specific": { 00:17:31.252 "passthru": { 00:17:31.252 "name": "pt2", 00:17:31.252 "base_bdev_name": "malloc2" 00:17:31.252 } 00:17:31.252 } 00:17:31.252 }' 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:31.252 10:21:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:31.511 [2024-06-10 10:21:37.098986] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.769 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 3440ec16-2713-11ef-b084-113036b5c18d '!=' 3440ec16-2713-11ef-b084-113036b5c18d ']' 00:17:31.769 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:31.769 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:31.769 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:17:31.769 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:32.027 [2024-06-10 10:21:37.450978] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=1 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.027 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.284 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.285 "name": "raid_bdev1", 00:17:32.285 "uuid": "3440ec16-2713-11ef-b084-113036b5c18d", 00:17:32.285 "strip_size_kb": 0, 00:17:32.285 "state": "online", 00:17:32.285 "raid_level": "raid1", 00:17:32.285 "superblock": true, 00:17:32.285 "num_base_bdevs": 2, 00:17:32.285 "num_base_bdevs_discovered": 1, 00:17:32.285 "num_base_bdevs_operational": 1, 00:17:32.285 "base_bdevs_list": [ 00:17:32.285 { 00:17:32.285 "name": null, 00:17:32.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.285 "is_configured": false, 00:17:32.285 "data_offset": 256, 00:17:32.285 "data_size": 7936 00:17:32.285 }, 00:17:32.285 { 00:17:32.285 "name": "pt2", 00:17:32.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.285 "is_configured": true, 00:17:32.285 "data_offset": 256, 00:17:32.285 "data_size": 7936 00:17:32.285 } 00:17:32.285 ] 00:17:32.285 }' 00:17:32.285 10:21:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.285 10:21:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.863 10:21:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:33.120 [2024-06-10 10:21:38.487013] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.120 [2024-06-10 10:21:38.487044] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.120 [2024-06-10 10:21:38.487064] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.120 [2024-06-10 10:21:38.487076] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.120 [2024-06-10 10:21:38.487081] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b283180 name raid_bdev1, state offline 00:17:33.120 10:21:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.120 10:21:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:33.376 10:21:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:33.376 10:21:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:33.376 10:21:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:33.376 10:21:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:33.376 10:21:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:33.773 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:33.773 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:33.773 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:33.773 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:33.773 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:17:33.773 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:34.030 [2024-06-10 10:21:39.503095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.030 [2024-06-10 10:21:39.503186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.030 [2024-06-10 10:21:39.503212] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b282f00 00:17:34.030 [2024-06-10 10:21:39.503230] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.030 [2024-06-10 10:21:39.503836] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.030 [2024-06-10 10:21:39.503893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.030 [2024-06-10 10:21:39.503936] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:34.030 [2024-06-10 10:21:39.503956] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.030 [2024-06-10 10:21:39.503980] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b283180 00:17:34.030 [2024-06-10 10:21:39.503988] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:34.030 [2024-06-10 10:21:39.504027] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b2e5e20 00:17:34.030 [2024-06-10 10:21:39.504063] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b283180 00:17:34.030 [2024-06-10 10:21:39.504070] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b283180 00:17:34.030 [2024-06-10 10:21:39.504095] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.030 pt2 00:17:34.030 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.030 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:34.030 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:34.030 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.030 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.031 10:21:39 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:34.031 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.031 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.031 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.031 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.031 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.031 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.289 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.289 "name": "raid_bdev1", 00:17:34.289 "uuid": "3440ec16-2713-11ef-b084-113036b5c18d", 00:17:34.289 "strip_size_kb": 0, 00:17:34.289 "state": "online", 00:17:34.289 "raid_level": "raid1", 00:17:34.289 "superblock": true, 00:17:34.289 "num_base_bdevs": 2, 00:17:34.289 "num_base_bdevs_discovered": 1, 00:17:34.289 "num_base_bdevs_operational": 1, 00:17:34.289 "base_bdevs_list": [ 00:17:34.289 { 00:17:34.289 "name": null, 00:17:34.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.289 "is_configured": false, 00:17:34.289 "data_offset": 256, 00:17:34.289 "data_size": 7936 00:17:34.289 }, 00:17:34.289 { 00:17:34.289 "name": "pt2", 00:17:34.289 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.289 "is_configured": true, 00:17:34.289 "data_offset": 256, 00:17:34.289 "data_size": 7936 00:17:34.289 } 00:17:34.289 ] 00:17:34.289 }' 00:17:34.289 10:21:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.289 10:21:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.547 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:34.805 [2024-06-10 10:21:40.259100] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.805 [2024-06-10 10:21:40.259126] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:34.805 [2024-06-10 10:21:40.259147] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:34.806 [2024-06-10 10:21:40.259159] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:34.806 [2024-06-10 10:21:40.259163] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b283180 name raid_bdev1, state offline 00:17:34.806 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.806 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:35.065 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:35.065 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:35.065 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 
00:17:35.065 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.324 [2024-06-10 10:21:40.747140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.324 [2024-06-10 10:21:40.747213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.324 [2024-06-10 10:21:40.747225] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b282c80 00:17:35.324 [2024-06-10 10:21:40.747242] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.324 [2024-06-10 10:21:40.747738] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.324 [2024-06-10 10:21:40.747769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.324 [2024-06-10 10:21:40.747792] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:35.324 [2024-06-10 10:21:40.747802] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.324 [2024-06-10 10:21:40.747820] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:35.324 [2024-06-10 10:21:40.747824] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.324 [2024-06-10 10:21:40.747830] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b282780 name raid_bdev1, state configuring 00:17:35.324 [2024-06-10 10:21:40.747837] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.324 [2024-06-10 10:21:40.747850] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b282780 00:17:35.324 [2024-06-10 10:21:40.747853] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:35.324 [2024-06-10 10:21:40.747872] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b2e5e20 00:17:35.324 [2024-06-10 10:21:40.747891] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b282780 00:17:35.324 [2024-06-10 10:21:40.747895] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b282780 00:17:35.324 [2024-06-10 10:21:40.747906] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.324 pt1 00:17:35.324 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:35.324 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.324 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:35.325 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:35.325 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:35.325 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:35.325 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:35.325 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:17:35.325 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.325 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.325 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.325 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.325 10:21:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.583 10:21:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.583 "name": "raid_bdev1", 00:17:35.583 "uuid": "3440ec16-2713-11ef-b084-113036b5c18d", 00:17:35.583 "strip_size_kb": 0, 00:17:35.583 "state": "online", 00:17:35.583 "raid_level": "raid1", 00:17:35.583 "superblock": true, 00:17:35.583 "num_base_bdevs": 2, 00:17:35.583 "num_base_bdevs_discovered": 1, 00:17:35.583 "num_base_bdevs_operational": 1, 00:17:35.583 "base_bdevs_list": [ 00:17:35.583 { 00:17:35.583 "name": null, 00:17:35.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.583 "is_configured": false, 00:17:35.583 "data_offset": 256, 00:17:35.583 "data_size": 7936 00:17:35.583 }, 00:17:35.583 { 00:17:35.583 "name": "pt2", 00:17:35.583 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.583 "is_configured": true, 00:17:35.583 "data_offset": 256, 00:17:35.583 "data_size": 7936 00:17:35.583 } 00:17:35.583 ] 00:17:35.583 }' 00:17:35.583 10:21:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.583 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.842 10:21:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:35.842 10:21:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:36.100 10:21:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:36.100 10:21:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:36.100 10:21:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:36.359 [2024-06-10 10:21:41.939245] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.359 10:21:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 3440ec16-2713-11ef-b084-113036b5c18d '!=' 3440ec16-2713-11ef-b084-113036b5c18d ']' 00:17:36.359 10:21:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 67271 00:17:36.359 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@949 -- # '[' -z 67271 ']' 00:17:36.359 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # kill -0 67271 00:17:36.359 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # uname 00:17:36.359 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:17:36.359 10:21:41 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # tail -1 00:17:36.617 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # ps -c -o command 67271 00:17:36.617 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:17:36.617 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:17:36.617 killing process with pid 67271 00:17:36.617 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # echo 'killing process with pid 67271' 00:17:36.617 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # kill 67271 00:17:36.617 [2024-06-10 10:21:41.971464] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:36.617 10:21:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # wait 67271 00:17:36.617 [2024-06-10 10:21:41.971503] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.617 [2024-06-10 10:21:41.971526] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.617 [2024-06-10 10:21:41.971533] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b282780 name raid_bdev1, state offline 00:17:36.617 [2024-06-10 10:21:41.981320] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:36.617 10:21:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:17:36.618 00:17:36.618 real 0m13.668s 00:17:36.618 user 0m24.599s 00:17:36.618 sys 0m1.968s 00:17:36.618 10:21:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:36.618 ************************************ 00:17:36.618 END TEST raid_superblock_test_md_separate 00:17:36.618 ************************************ 00:17:36.618 10:21:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.618 10:21:42 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' '' = true ']' 00:17:36.618 10:21:42 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:17:36.618 10:21:42 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:36.618 10:21:42 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:17:36.618 10:21:42 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:36.618 10:21:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:36.618 ************************************ 00:17:36.618 START TEST raid_state_function_test_sb_md_interleaved 00:17:36.618 ************************************ 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # raid_state_function_test raid1 2 true 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:36.618 10:21:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=67666 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 67666' 00:17:36.618 Process raid pid: 67666 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 67666 /var/tmp/spdk-raid.sock 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@830 -- # '[' -z 67666 ']' 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:36.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:36.618 10:21:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.618 [2024-06-10 10:21:42.207140] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:17:36.618 [2024-06-10 10:21:42.207574] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:37.183 EAL: TSC is not safe to use in SMP mode 00:17:37.183 EAL: TSC is not invariant 00:17:37.183 [2024-06-10 10:21:42.712930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.442 [2024-06-10 10:21:42.812552] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:37.442 [2024-06-10 10:21:42.815307] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.442 [2024-06-10 10:21:42.816236] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.442 [2024-06-10 10:21:42.816254] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.701 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:37.701 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@863 -- # return 0 00:17:37.701 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:37.959 [2024-06-10 10:21:43.469266] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.959 [2024-06-10 10:21:43.469331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.959 [2024-06-10 10:21:43.469336] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.959 [2024-06-10 10:21:43.469345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.960 10:21:43 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.960 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.219 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.219 "name": "Existed_Raid", 00:17:38.219 "uuid": "3be7eff2-2713-11ef-b084-113036b5c18d", 00:17:38.219 "strip_size_kb": 0, 00:17:38.219 "state": "configuring", 00:17:38.219 "raid_level": "raid1", 00:17:38.219 "superblock": true, 00:17:38.219 "num_base_bdevs": 2, 00:17:38.219 "num_base_bdevs_discovered": 0, 00:17:38.219 "num_base_bdevs_operational": 2, 00:17:38.219 "base_bdevs_list": [ 00:17:38.219 { 00:17:38.219 "name": "BaseBdev1", 00:17:38.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.219 "is_configured": false, 00:17:38.219 "data_offset": 0, 00:17:38.219 "data_size": 0 00:17:38.219 }, 00:17:38.219 { 00:17:38.219 "name": "BaseBdev2", 00:17:38.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.219 "is_configured": false, 00:17:38.219 "data_offset": 0, 00:17:38.219 "data_size": 0 00:17:38.219 } 00:17:38.219 ] 00:17:38.219 }' 00:17:38.219 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.219 10:21:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.785 10:21:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:39.044 [2024-06-10 10:21:44.409273] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:39.044 [2024-06-10 10:21:44.409305] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c406500 name Existed_Raid, state configuring 00:17:39.044 10:21:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:39.302 [2024-06-10 10:21:44.681315] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.302 [2024-06-10 10:21:44.681364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:39.302 [2024-06-10 10:21:44.681369] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:39.302 [2024-06-10 10:21:44.681377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:39.302 10:21:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:39.561 [2024-06-10 10:21:44.934199] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.561 BaseBdev1 00:17:39.561 10:21:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # 
waitforbdev BaseBdev1 00:17:39.561 10:21:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev1 00:17:39.561 10:21:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:39.561 10:21:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local i 00:17:39.561 10:21:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:39.561 10:21:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:39.561 10:21:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.820 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:40.079 [ 00:17:40.079 { 00:17:40.079 "name": "BaseBdev1", 00:17:40.079 "aliases": [ 00:17:40.079 "3cc7565c-2713-11ef-b084-113036b5c18d" 00:17:40.079 ], 00:17:40.079 "product_name": "Malloc disk", 00:17:40.079 "block_size": 4128, 00:17:40.079 "num_blocks": 8192, 00:17:40.079 "uuid": "3cc7565c-2713-11ef-b084-113036b5c18d", 00:17:40.079 "md_size": 32, 00:17:40.079 "md_interleave": true, 00:17:40.079 "dif_type": 0, 00:17:40.079 "assigned_rate_limits": { 00:17:40.079 "rw_ios_per_sec": 0, 00:17:40.079 "rw_mbytes_per_sec": 0, 00:17:40.079 "r_mbytes_per_sec": 0, 00:17:40.079 "w_mbytes_per_sec": 0 00:17:40.079 }, 00:17:40.079 "claimed": true, 00:17:40.079 "claim_type": "exclusive_write", 00:17:40.079 "zoned": false, 00:17:40.079 "supported_io_types": { 00:17:40.079 "read": true, 00:17:40.079 "write": true, 00:17:40.079 "unmap": true, 00:17:40.079 "write_zeroes": true, 00:17:40.079 "flush": true, 00:17:40.079 "reset": true, 00:17:40.079 "compare": false, 00:17:40.079 "compare_and_write": false, 00:17:40.079 "abort": true, 00:17:40.079 "nvme_admin": false, 00:17:40.079 "nvme_io": false 00:17:40.079 }, 00:17:40.079 "memory_domains": [ 00:17:40.079 { 00:17:40.079 "dma_device_id": "system", 00:17:40.079 "dma_device_type": 1 00:17:40.079 }, 00:17:40.079 { 00:17:40.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.079 "dma_device_type": 2 00:17:40.079 } 00:17:40.079 ], 00:17:40.079 "driver_specific": {} 00:17:40.079 } 00:17:40.079 ] 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # return 0 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:40.079 10:21:45 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.079 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.079 "name": "Existed_Raid", 00:17:40.079 "uuid": "3ca0e1a0-2713-11ef-b084-113036b5c18d", 00:17:40.079 "strip_size_kb": 0, 00:17:40.079 "state": "configuring", 00:17:40.079 "raid_level": "raid1", 00:17:40.079 "superblock": true, 00:17:40.079 "num_base_bdevs": 2, 00:17:40.079 "num_base_bdevs_discovered": 1, 00:17:40.079 "num_base_bdevs_operational": 2, 00:17:40.079 "base_bdevs_list": [ 00:17:40.079 { 00:17:40.079 "name": "BaseBdev1", 00:17:40.079 "uuid": "3cc7565c-2713-11ef-b084-113036b5c18d", 00:17:40.079 "is_configured": true, 00:17:40.079 "data_offset": 256, 00:17:40.079 "data_size": 7936 00:17:40.079 }, 00:17:40.079 { 00:17:40.079 "name": "BaseBdev2", 00:17:40.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.079 "is_configured": false, 00:17:40.079 "data_offset": 0, 00:17:40.079 "data_size": 0 00:17:40.079 } 00:17:40.079 ] 00:17:40.079 }' 00:17:40.080 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.080 10:21:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.661 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:40.661 [2024-06-10 10:21:46.261360] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.661 [2024-06-10 10:21:46.261411] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c406500 name Existed_Raid, state configuring 00:17:40.918 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:40.918 [2024-06-10 10:21:46.513391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.918 [2024-06-10 10:21:46.514085] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.918 [2024-06-10 10:21:46.514130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.175 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.432 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.432 "name": "Existed_Raid", 00:17:41.432 "uuid": "3db86ef8-2713-11ef-b084-113036b5c18d", 00:17:41.432 "strip_size_kb": 0, 00:17:41.432 "state": "configuring", 00:17:41.432 "raid_level": "raid1", 00:17:41.432 "superblock": true, 00:17:41.432 "num_base_bdevs": 2, 00:17:41.432 "num_base_bdevs_discovered": 1, 00:17:41.432 "num_base_bdevs_operational": 2, 00:17:41.432 "base_bdevs_list": [ 00:17:41.432 { 00:17:41.432 "name": "BaseBdev1", 00:17:41.432 "uuid": "3cc7565c-2713-11ef-b084-113036b5c18d", 00:17:41.432 "is_configured": true, 00:17:41.432 "data_offset": 256, 00:17:41.432 "data_size": 7936 00:17:41.432 }, 00:17:41.432 { 00:17:41.432 "name": "BaseBdev2", 00:17:41.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.432 "is_configured": false, 00:17:41.432 "data_offset": 0, 00:17:41.432 "data_size": 0 00:17:41.432 } 00:17:41.432 ] 00:17:41.432 }' 00:17:41.432 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.432 10:21:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.690 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:41.948 [2024-06-10 10:21:47.429483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:41.948 [2024-06-10 10:21:47.429537] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c406a00 00:17:41.948 [2024-06-10 10:21:47.429543] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:41.948 [2024-06-10 10:21:47.429562] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c469e20 
00:17:41.948 [2024-06-10 10:21:47.429574] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c406a00 00:17:41.948 [2024-06-10 10:21:47.429578] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c406a00 00:17:41.948 [2024-06-10 10:21:47.429587] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.948 BaseBdev2 00:17:41.948 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:41.948 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_name=BaseBdev2 00:17:41.948 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:17:41.948 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local i 00:17:41.948 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:17:41.948 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:17:41.948 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.206 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:42.465 [ 00:17:42.465 { 00:17:42.465 "name": "BaseBdev2", 00:17:42.465 "aliases": [ 00:17:42.465 "3e443614-2713-11ef-b084-113036b5c18d" 00:17:42.465 ], 00:17:42.465 "product_name": "Malloc disk", 00:17:42.465 "block_size": 4128, 00:17:42.465 "num_blocks": 8192, 00:17:42.465 "uuid": "3e443614-2713-11ef-b084-113036b5c18d", 00:17:42.465 "md_size": 32, 00:17:42.465 "md_interleave": true, 00:17:42.465 "dif_type": 0, 00:17:42.465 "assigned_rate_limits": { 00:17:42.465 "rw_ios_per_sec": 0, 00:17:42.465 "rw_mbytes_per_sec": 0, 00:17:42.465 "r_mbytes_per_sec": 0, 00:17:42.465 "w_mbytes_per_sec": 0 00:17:42.465 }, 00:17:42.465 "claimed": true, 00:17:42.465 "claim_type": "exclusive_write", 00:17:42.465 "zoned": false, 00:17:42.465 "supported_io_types": { 00:17:42.465 "read": true, 00:17:42.465 "write": true, 00:17:42.465 "unmap": true, 00:17:42.465 "write_zeroes": true, 00:17:42.465 "flush": true, 00:17:42.465 "reset": true, 00:17:42.465 "compare": false, 00:17:42.465 "compare_and_write": false, 00:17:42.465 "abort": true, 00:17:42.465 "nvme_admin": false, 00:17:42.465 "nvme_io": false 00:17:42.465 }, 00:17:42.465 "memory_domains": [ 00:17:42.465 { 00:17:42.465 "dma_device_id": "system", 00:17:42.465 "dma_device_type": 1 00:17:42.465 }, 00:17:42.465 { 00:17:42.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.465 "dma_device_type": 2 00:17:42.465 } 00:17:42.465 ], 00:17:42.465 "driver_specific": {} 00:17:42.465 } 00:17:42.465 ] 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # return 0 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.465 10:21:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.723 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:42.723 "name": "Existed_Raid", 00:17:42.723 "uuid": "3db86ef8-2713-11ef-b084-113036b5c18d", 00:17:42.723 "strip_size_kb": 0, 00:17:42.723 "state": "online", 00:17:42.723 "raid_level": "raid1", 00:17:42.723 "superblock": true, 00:17:42.723 "num_base_bdevs": 2, 00:17:42.723 "num_base_bdevs_discovered": 2, 00:17:42.723 "num_base_bdevs_operational": 2, 00:17:42.723 "base_bdevs_list": [ 00:17:42.723 { 00:17:42.723 "name": "BaseBdev1", 00:17:42.723 "uuid": "3cc7565c-2713-11ef-b084-113036b5c18d", 00:17:42.723 "is_configured": true, 00:17:42.723 "data_offset": 256, 00:17:42.723 "data_size": 7936 00:17:42.723 }, 00:17:42.723 { 00:17:42.723 "name": "BaseBdev2", 00:17:42.723 "uuid": "3e443614-2713-11ef-b084-113036b5c18d", 00:17:42.723 "is_configured": true, 00:17:42.723 "data_offset": 256, 00:17:42.723 "data_size": 7936 00:17:42.723 } 00:17:42.723 ] 00:17:42.723 }' 00:17:42.723 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:42.723 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.049 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:43.049 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:43.049 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:43.049 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:43.049 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:43.049 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@198 -- # local name 00:17:43.049 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:43.049 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:43.309 [2024-06-10 10:21:48.809538] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.309 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:43.309 "name": "Existed_Raid", 00:17:43.309 "aliases": [ 00:17:43.309 "3db86ef8-2713-11ef-b084-113036b5c18d" 00:17:43.309 ], 00:17:43.309 "product_name": "Raid Volume", 00:17:43.309 "block_size": 4128, 00:17:43.309 "num_blocks": 7936, 00:17:43.309 "uuid": "3db86ef8-2713-11ef-b084-113036b5c18d", 00:17:43.309 "md_size": 32, 00:17:43.309 "md_interleave": true, 00:17:43.309 "dif_type": 0, 00:17:43.309 "assigned_rate_limits": { 00:17:43.309 "rw_ios_per_sec": 0, 00:17:43.309 "rw_mbytes_per_sec": 0, 00:17:43.309 "r_mbytes_per_sec": 0, 00:17:43.309 "w_mbytes_per_sec": 0 00:17:43.309 }, 00:17:43.309 "claimed": false, 00:17:43.309 "zoned": false, 00:17:43.309 "supported_io_types": { 00:17:43.309 "read": true, 00:17:43.309 "write": true, 00:17:43.309 "unmap": false, 00:17:43.309 "write_zeroes": true, 00:17:43.309 "flush": false, 00:17:43.309 "reset": true, 00:17:43.309 "compare": false, 00:17:43.309 "compare_and_write": false, 00:17:43.309 "abort": false, 00:17:43.309 "nvme_admin": false, 00:17:43.309 "nvme_io": false 00:17:43.309 }, 00:17:43.309 "memory_domains": [ 00:17:43.309 { 00:17:43.309 "dma_device_id": "system", 00:17:43.309 "dma_device_type": 1 00:17:43.309 }, 00:17:43.309 { 00:17:43.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.309 "dma_device_type": 2 00:17:43.309 }, 00:17:43.309 { 00:17:43.309 "dma_device_id": "system", 00:17:43.309 "dma_device_type": 1 00:17:43.309 }, 00:17:43.309 { 00:17:43.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.309 "dma_device_type": 2 00:17:43.309 } 00:17:43.309 ], 00:17:43.309 "driver_specific": { 00:17:43.309 "raid": { 00:17:43.309 "uuid": "3db86ef8-2713-11ef-b084-113036b5c18d", 00:17:43.309 "strip_size_kb": 0, 00:17:43.309 "state": "online", 00:17:43.309 "raid_level": "raid1", 00:17:43.309 "superblock": true, 00:17:43.309 "num_base_bdevs": 2, 00:17:43.309 "num_base_bdevs_discovered": 2, 00:17:43.309 "num_base_bdevs_operational": 2, 00:17:43.309 "base_bdevs_list": [ 00:17:43.309 { 00:17:43.309 "name": "BaseBdev1", 00:17:43.309 "uuid": "3cc7565c-2713-11ef-b084-113036b5c18d", 00:17:43.309 "is_configured": true, 00:17:43.309 "data_offset": 256, 00:17:43.309 "data_size": 7936 00:17:43.309 }, 00:17:43.309 { 00:17:43.309 "name": "BaseBdev2", 00:17:43.309 "uuid": "3e443614-2713-11ef-b084-113036b5c18d", 00:17:43.309 "is_configured": true, 00:17:43.309 "data_offset": 256, 00:17:43.309 "data_size": 7936 00:17:43.309 } 00:17:43.309 ] 00:17:43.309 } 00:17:43.309 } 00:17:43.309 }' 00:17:43.309 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.309 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:43.309 BaseBdev2' 00:17:43.309 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:17:43.309 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:43.309 10:21:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:43.567 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:43.567 "name": "BaseBdev1", 00:17:43.567 "aliases": [ 00:17:43.567 "3cc7565c-2713-11ef-b084-113036b5c18d" 00:17:43.567 ], 00:17:43.567 "product_name": "Malloc disk", 00:17:43.567 "block_size": 4128, 00:17:43.567 "num_blocks": 8192, 00:17:43.567 "uuid": "3cc7565c-2713-11ef-b084-113036b5c18d", 00:17:43.567 "md_size": 32, 00:17:43.567 "md_interleave": true, 00:17:43.567 "dif_type": 0, 00:17:43.567 "assigned_rate_limits": { 00:17:43.567 "rw_ios_per_sec": 0, 00:17:43.567 "rw_mbytes_per_sec": 0, 00:17:43.567 "r_mbytes_per_sec": 0, 00:17:43.567 "w_mbytes_per_sec": 0 00:17:43.567 }, 00:17:43.567 "claimed": true, 00:17:43.567 "claim_type": "exclusive_write", 00:17:43.567 "zoned": false, 00:17:43.567 "supported_io_types": { 00:17:43.567 "read": true, 00:17:43.567 "write": true, 00:17:43.567 "unmap": true, 00:17:43.567 "write_zeroes": true, 00:17:43.567 "flush": true, 00:17:43.567 "reset": true, 00:17:43.567 "compare": false, 00:17:43.567 "compare_and_write": false, 00:17:43.567 "abort": true, 00:17:43.567 "nvme_admin": false, 00:17:43.567 "nvme_io": false 00:17:43.567 }, 00:17:43.568 "memory_domains": [ 00:17:43.568 { 00:17:43.568 "dma_device_id": "system", 00:17:43.568 "dma_device_type": 1 00:17:43.568 }, 00:17:43.568 { 00:17:43.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.568 "dma_device_type": 2 00:17:43.568 } 00:17:43.568 ], 00:17:43.568 "driver_specific": {} 00:17:43.568 }' 00:17:43.568 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.568 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.568 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:43.568 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.568 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.568 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:43.568 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.568 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.568 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:43.568 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:43.826 "name": "BaseBdev2", 00:17:43.826 "aliases": [ 00:17:43.826 "3e443614-2713-11ef-b084-113036b5c18d" 00:17:43.826 ], 00:17:43.826 "product_name": "Malloc disk", 00:17:43.826 "block_size": 4128, 00:17:43.826 "num_blocks": 8192, 00:17:43.826 "uuid": "3e443614-2713-11ef-b084-113036b5c18d", 00:17:43.826 "md_size": 32, 00:17:43.826 "md_interleave": true, 00:17:43.826 "dif_type": 0, 00:17:43.826 "assigned_rate_limits": { 00:17:43.826 "rw_ios_per_sec": 0, 00:17:43.826 "rw_mbytes_per_sec": 0, 00:17:43.826 "r_mbytes_per_sec": 0, 00:17:43.826 "w_mbytes_per_sec": 0 00:17:43.826 }, 00:17:43.826 "claimed": true, 00:17:43.826 "claim_type": "exclusive_write", 00:17:43.826 "zoned": false, 00:17:43.826 "supported_io_types": { 00:17:43.826 "read": true, 00:17:43.826 "write": true, 00:17:43.826 "unmap": true, 00:17:43.826 "write_zeroes": true, 00:17:43.826 "flush": true, 00:17:43.826 "reset": true, 00:17:43.826 "compare": false, 00:17:43.826 "compare_and_write": false, 00:17:43.826 "abort": true, 00:17:43.826 "nvme_admin": false, 00:17:43.826 "nvme_io": false 00:17:43.826 }, 00:17:43.826 "memory_domains": [ 00:17:43.826 { 00:17:43.826 "dma_device_id": "system", 00:17:43.826 "dma_device_type": 1 00:17:43.826 }, 00:17:43.826 { 00:17:43.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.826 "dma_device_type": 2 00:17:43.826 } 00:17:43.826 ], 00:17:43.826 "driver_specific": {} 00:17:43.826 }' 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:43.826 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.082 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:44.082 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:44.082 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.082 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:44.082 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:44.082 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:44.339 [2024-06-10 10:21:49.725518] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:44.339 10:21:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.339 10:21:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.596 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:44.596 "name": "Existed_Raid", 00:17:44.596 "uuid": "3db86ef8-2713-11ef-b084-113036b5c18d", 00:17:44.596 "strip_size_kb": 0, 00:17:44.596 "state": "online", 00:17:44.596 "raid_level": "raid1", 00:17:44.596 "superblock": true, 00:17:44.596 "num_base_bdevs": 2, 00:17:44.596 "num_base_bdevs_discovered": 1, 00:17:44.596 "num_base_bdevs_operational": 1, 00:17:44.596 "base_bdevs_list": [ 00:17:44.596 { 00:17:44.596 "name": null, 00:17:44.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.596 "is_configured": false, 00:17:44.596 "data_offset": 256, 00:17:44.596 "data_size": 7936 00:17:44.596 }, 00:17:44.596 { 00:17:44.596 "name": "BaseBdev2", 00:17:44.596 "uuid": "3e443614-2713-11ef-b084-113036b5c18d", 00:17:44.596 "is_configured": true, 00:17:44.596 "data_offset": 256, 00:17:44.596 "data_size": 7936 00:17:44.596 } 00:17:44.596 ] 00:17:44.596 }' 00:17:44.596 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:44.596 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.852 10:21:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:44.852 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:44.852 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:44.852 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.109 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:45.109 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:45.109 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:45.366 [2024-06-10 10:21:50.818510] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:45.366 [2024-06-10 10:21:50.818551] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.366 [2024-06-10 10:21:50.823417] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.366 [2024-06-10 10:21:50.823433] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.366 [2024-06-10 10:21:50.823437] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c406a00 name Existed_Raid, state offline 00:17:45.366 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:45.366 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:45.366 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.366 10:21:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 67666 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@949 -- # '[' -z 67666 ']' 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # kill -0 67666 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # uname 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # ps -c -o command 67666 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # tail -1 00:17:45.622 
10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:17:45.622 killing process with pid 67666 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # echo 'killing process with pid 67666' 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # kill 67666 00:17:45.622 [2024-06-10 10:21:51.107466] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.622 [2024-06-10 10:21:51.107512] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.622 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # wait 67666 00:17:45.880 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:17:45.880 00:17:45.880 real 0m9.086s 00:17:45.880 user 0m16.084s 00:17:45.880 sys 0m1.368s 00:17:45.880 ************************************ 00:17:45.880 END TEST raid_state_function_test_sb_md_interleaved 00:17:45.880 ************************************ 00:17:45.880 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:45.880 10:21:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.880 10:21:51 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:45.880 10:21:51 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:17:45.880 10:21:51 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:45.880 10:21:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.880 ************************************ 00:17:45.880 START TEST raid_superblock_test_md_interleaved 00:17:45.880 ************************************ 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # raid_superblock_test raid1 2 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 
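For reference, the raid_superblock_test flow that starts here can be reproduced by hand with the same rpc.py calls that appear later in this trace. The sketch below is assembled from those traced commands (socket path, bdev names, and flags are copied verbatim from the log); the $RPC shorthand and the condensed jq filters are illustrative additions, not part of the test script.

    # Sketch only -- commands mirror the rpc.py calls traced below; assumes the
    # bdev_svc target for this test is already listening on the RPC socket.
    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Two malloc bdevs with 32-byte interleaved metadata (4096 + 32 = 4128 block_size).
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc1
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc2

    # Passthru bdevs pt1/pt2 over the malloc bdevs, with fixed UUIDs.
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # raid1 volume named raid_bdev1 with an on-disk superblock (-s).
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

    # Property checks corresponding to bdev_raid.sh@204-208 and @116-128 below.
    $RPC bdev_get_bdevs -b raid_bdev1 | jq '.[] | {block_size, md_size, md_interleave, dif_type}'
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'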
00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=67936 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 67936 /var/tmp/spdk-raid.sock 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@830 -- # '[' -z 67936 ']' 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:45.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:45.880 10:21:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.880 [2024-06-10 10:21:51.327617] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:17:45.880 [2024-06-10 10:21:51.327848] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:46.444 EAL: TSC is not safe to use in SMP mode 00:17:46.444 EAL: TSC is not invariant 00:17:46.444 [2024-06-10 10:21:51.797603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.444 [2024-06-10 10:21:51.882586] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:46.444 [2024-06-10 10:21:51.884718] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.444 [2024-06-10 10:21:51.885424] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.444 [2024-06-10 10:21:51.885436] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@863 -- # return 0 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.009 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:47.266 malloc1 00:17:47.266 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:47.524 [2024-06-10 10:21:52.928583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:47.524 [2024-06-10 10:21:52.928653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.524 [2024-06-10 10:21:52.928666] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd23780 00:17:47.524 [2024-06-10 10:21:52.928674] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.524 [2024-06-10 10:21:52.929399] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.524 [2024-06-10 10:21:52.929422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:47.524 pt1 00:17:47.524 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:47.524 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:47.524 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:47.524 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:47.524 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:47.524 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:17:47.524 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.524 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.524 10:21:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:47.781 malloc2 00:17:47.781 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:48.038 [2024-06-10 10:21:53.404572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:48.038 [2024-06-10 10:21:53.404629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.038 [2024-06-10 10:21:53.404639] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd23c80 00:17:48.038 [2024-06-10 10:21:53.404649] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.038 [2024-06-10 10:21:53.405075] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.038 [2024-06-10 10:21:53.405098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:48.038 pt2 00:17:48.038 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:48.038 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:48.038 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:48.038 [2024-06-10 10:21:53.640582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.038 [2024-06-10 10:21:53.640982] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.038 [2024-06-10 10:21:53.641031] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cd23f00 00:17:48.038 [2024-06-10 10:21:53.641036] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:48.038 [2024-06-10 10:21:53.641075] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cd86e20 00:17:48.038 [2024-06-10 10:21:53.641089] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cd23f00 00:17:48.039 [2024-06-10 10:21:53.641092] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cd23f00 00:17:48.039 [2024-06-10 10:21:53.641102] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:48.296 10:21:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.296 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.297 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.297 "name": "raid_bdev1", 00:17:48.297 "uuid": "41f7f4fc-2713-11ef-b084-113036b5c18d", 00:17:48.297 "strip_size_kb": 0, 00:17:48.297 "state": "online", 00:17:48.297 "raid_level": "raid1", 00:17:48.297 "superblock": true, 00:17:48.297 "num_base_bdevs": 2, 00:17:48.297 "num_base_bdevs_discovered": 2, 00:17:48.297 "num_base_bdevs_operational": 2, 00:17:48.297 "base_bdevs_list": [ 00:17:48.297 { 00:17:48.297 "name": "pt1", 00:17:48.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.297 "is_configured": true, 00:17:48.297 "data_offset": 256, 00:17:48.297 "data_size": 7936 00:17:48.297 }, 00:17:48.297 { 00:17:48.297 "name": "pt2", 00:17:48.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.297 "is_configured": true, 00:17:48.297 "data_offset": 256, 00:17:48.297 "data_size": 7936 00:17:48.297 } 00:17:48.297 ] 00:17:48.297 }' 00:17:48.297 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.297 10:21:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.861 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:48.861 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:48.861 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:48.861 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:48.861 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:48.861 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:17:48.861 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:48.861 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:49.118 [2024-06-10 10:21:54.504648] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.118 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:49.118 "name": 
"raid_bdev1", 00:17:49.118 "aliases": [ 00:17:49.118 "41f7f4fc-2713-11ef-b084-113036b5c18d" 00:17:49.118 ], 00:17:49.118 "product_name": "Raid Volume", 00:17:49.118 "block_size": 4128, 00:17:49.118 "num_blocks": 7936, 00:17:49.118 "uuid": "41f7f4fc-2713-11ef-b084-113036b5c18d", 00:17:49.118 "md_size": 32, 00:17:49.118 "md_interleave": true, 00:17:49.118 "dif_type": 0, 00:17:49.119 "assigned_rate_limits": { 00:17:49.119 "rw_ios_per_sec": 0, 00:17:49.119 "rw_mbytes_per_sec": 0, 00:17:49.119 "r_mbytes_per_sec": 0, 00:17:49.119 "w_mbytes_per_sec": 0 00:17:49.119 }, 00:17:49.119 "claimed": false, 00:17:49.119 "zoned": false, 00:17:49.119 "supported_io_types": { 00:17:49.119 "read": true, 00:17:49.119 "write": true, 00:17:49.119 "unmap": false, 00:17:49.119 "write_zeroes": true, 00:17:49.119 "flush": false, 00:17:49.119 "reset": true, 00:17:49.119 "compare": false, 00:17:49.119 "compare_and_write": false, 00:17:49.119 "abort": false, 00:17:49.119 "nvme_admin": false, 00:17:49.119 "nvme_io": false 00:17:49.119 }, 00:17:49.119 "memory_domains": [ 00:17:49.119 { 00:17:49.119 "dma_device_id": "system", 00:17:49.119 "dma_device_type": 1 00:17:49.119 }, 00:17:49.119 { 00:17:49.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.119 "dma_device_type": 2 00:17:49.119 }, 00:17:49.119 { 00:17:49.119 "dma_device_id": "system", 00:17:49.119 "dma_device_type": 1 00:17:49.119 }, 00:17:49.119 { 00:17:49.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.119 "dma_device_type": 2 00:17:49.119 } 00:17:49.119 ], 00:17:49.119 "driver_specific": { 00:17:49.119 "raid": { 00:17:49.119 "uuid": "41f7f4fc-2713-11ef-b084-113036b5c18d", 00:17:49.119 "strip_size_kb": 0, 00:17:49.119 "state": "online", 00:17:49.119 "raid_level": "raid1", 00:17:49.119 "superblock": true, 00:17:49.119 "num_base_bdevs": 2, 00:17:49.119 "num_base_bdevs_discovered": 2, 00:17:49.119 "num_base_bdevs_operational": 2, 00:17:49.119 "base_bdevs_list": [ 00:17:49.119 { 00:17:49.119 "name": "pt1", 00:17:49.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.119 "is_configured": true, 00:17:49.119 "data_offset": 256, 00:17:49.119 "data_size": 7936 00:17:49.119 }, 00:17:49.119 { 00:17:49.119 "name": "pt2", 00:17:49.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.119 "is_configured": true, 00:17:49.119 "data_offset": 256, 00:17:49.119 "data_size": 7936 00:17:49.119 } 00:17:49.119 ] 00:17:49.119 } 00:17:49.119 } 00:17:49.119 }' 00:17:49.119 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.119 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:49.119 pt2' 00:17:49.119 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:49.119 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:49.119 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:49.396 "name": "pt1", 00:17:49.396 "aliases": [ 00:17:49.396 "00000000-0000-0000-0000-000000000001" 00:17:49.396 ], 00:17:49.396 "product_name": "passthru", 00:17:49.396 "block_size": 4128, 00:17:49.396 "num_blocks": 8192, 00:17:49.396 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:49.396 "md_size": 32, 00:17:49.396 "md_interleave": true, 00:17:49.396 "dif_type": 0, 00:17:49.396 "assigned_rate_limits": { 00:17:49.396 "rw_ios_per_sec": 0, 00:17:49.396 "rw_mbytes_per_sec": 0, 00:17:49.396 "r_mbytes_per_sec": 0, 00:17:49.396 "w_mbytes_per_sec": 0 00:17:49.396 }, 00:17:49.396 "claimed": true, 00:17:49.396 "claim_type": "exclusive_write", 00:17:49.396 "zoned": false, 00:17:49.396 "supported_io_types": { 00:17:49.396 "read": true, 00:17:49.396 "write": true, 00:17:49.396 "unmap": true, 00:17:49.396 "write_zeroes": true, 00:17:49.396 "flush": true, 00:17:49.396 "reset": true, 00:17:49.396 "compare": false, 00:17:49.396 "compare_and_write": false, 00:17:49.396 "abort": true, 00:17:49.396 "nvme_admin": false, 00:17:49.396 "nvme_io": false 00:17:49.396 }, 00:17:49.396 "memory_domains": [ 00:17:49.396 { 00:17:49.396 "dma_device_id": "system", 00:17:49.396 "dma_device_type": 1 00:17:49.396 }, 00:17:49.396 { 00:17:49.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.396 "dma_device_type": 2 00:17:49.396 } 00:17:49.396 ], 00:17:49.396 "driver_specific": { 00:17:49.396 "passthru": { 00:17:49.396 "name": "pt1", 00:17:49.396 "base_bdev_name": "malloc1" 00:17:49.396 } 00:17:49.396 } 00:17:49.396 }' 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:49.396 10:21:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:49.653 "name": "pt2", 00:17:49.653 "aliases": [ 00:17:49.653 "00000000-0000-0000-0000-000000000002" 00:17:49.653 ], 00:17:49.653 "product_name": "passthru", 00:17:49.653 "block_size": 4128, 00:17:49.653 "num_blocks": 8192, 00:17:49.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.653 "md_size": 32, 00:17:49.653 "md_interleave": true, 00:17:49.653 
"dif_type": 0, 00:17:49.653 "assigned_rate_limits": { 00:17:49.653 "rw_ios_per_sec": 0, 00:17:49.653 "rw_mbytes_per_sec": 0, 00:17:49.653 "r_mbytes_per_sec": 0, 00:17:49.653 "w_mbytes_per_sec": 0 00:17:49.653 }, 00:17:49.653 "claimed": true, 00:17:49.653 "claim_type": "exclusive_write", 00:17:49.653 "zoned": false, 00:17:49.653 "supported_io_types": { 00:17:49.653 "read": true, 00:17:49.653 "write": true, 00:17:49.653 "unmap": true, 00:17:49.653 "write_zeroes": true, 00:17:49.653 "flush": true, 00:17:49.653 "reset": true, 00:17:49.653 "compare": false, 00:17:49.653 "compare_and_write": false, 00:17:49.653 "abort": true, 00:17:49.653 "nvme_admin": false, 00:17:49.653 "nvme_io": false 00:17:49.653 }, 00:17:49.653 "memory_domains": [ 00:17:49.653 { 00:17:49.653 "dma_device_id": "system", 00:17:49.653 "dma_device_type": 1 00:17:49.653 }, 00:17:49.653 { 00:17:49.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.653 "dma_device_type": 2 00:17:49.653 } 00:17:49.653 ], 00:17:49.653 "driver_specific": { 00:17:49.653 "passthru": { 00:17:49.653 "name": "pt2", 00:17:49.653 "base_bdev_name": "malloc2" 00:17:49.653 } 00:17:49.653 } 00:17:49.653 }' 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:49.653 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:49.911 [2024-06-10 10:21:55.316648] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.911 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=41f7f4fc-2713-11ef-b084-113036b5c18d 00:17:49.911 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 41f7f4fc-2713-11ef-b084-113036b5c18d ']' 00:17:49.911 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:50.169 [2024-06-10 10:21:55.568651] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:17:50.169 [2024-06-10 10:21:55.568672] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.169 [2024-06-10 10:21:55.568688] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.169 [2024-06-10 10:21:55.568701] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.169 [2024-06-10 10:21:55.568705] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd23f00 name raid_bdev1, state offline 00:17:50.169 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.169 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:50.427 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:50.427 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:50.427 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:50.427 10:21:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:50.684 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:50.684 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:50.684 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:50.684 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:50.942 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:50.942 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:50.942 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@649 -- # local es=0 00:17:50.942 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:50.942 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.943 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:50.943 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.943 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:50.943 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@643 -- # type -P 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.943 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:50.943 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:50.943 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:50.943 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:51.202 [2024-06-10 10:21:56.788725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:51.202 [2024-06-10 10:21:56.789211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:51.202 [2024-06-10 10:21:56.789229] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:51.202 [2024-06-10 10:21:56.789262] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:51.202 [2024-06-10 10:21:56.789272] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.202 [2024-06-10 10:21:56.789276] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd23c80 name raid_bdev1, state configuring 00:17:51.202 request: 00:17:51.202 { 00:17:51.202 "name": "raid_bdev1", 00:17:51.202 "raid_level": "raid1", 00:17:51.202 "base_bdevs": [ 00:17:51.202 "malloc1", 00:17:51.202 "malloc2" 00:17:51.202 ], 00:17:51.202 "superblock": false, 00:17:51.202 "method": "bdev_raid_create", 00:17:51.202 "req_id": 1 00:17:51.202 } 00:17:51.202 Got JSON-RPC error response 00:17:51.202 response: 00:17:51.202 { 00:17:51.202 "code": -17, 00:17:51.202 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:51.202 } 00:17:51.460 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # es=1 00:17:51.460 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:51.460 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:51.460 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:51.460 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.460 10:21:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:51.460 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:51.460 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:51.460 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:51.718 [2024-06-10 10:21:57.244731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.718 [2024-06-10 10:21:57.244805] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.718 [2024-06-10 10:21:57.244815] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd23780 00:17:51.718 [2024-06-10 10:21:57.244831] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.718 [2024-06-10 10:21:57.245297] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.718 [2024-06-10 10:21:57.245319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.718 [2024-06-10 10:21:57.245337] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:51.718 [2024-06-10 10:21:57.245348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:51.718 pt1 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.718 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.976 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.976 "name": "raid_bdev1", 00:17:51.976 "uuid": "41f7f4fc-2713-11ef-b084-113036b5c18d", 00:17:51.976 "strip_size_kb": 0, 00:17:51.976 "state": "configuring", 00:17:51.976 "raid_level": "raid1", 00:17:51.976 "superblock": true, 00:17:51.976 "num_base_bdevs": 2, 00:17:51.976 "num_base_bdevs_discovered": 1, 00:17:51.976 "num_base_bdevs_operational": 2, 00:17:51.976 "base_bdevs_list": [ 00:17:51.976 { 00:17:51.976 "name": "pt1", 00:17:51.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.976 "is_configured": true, 00:17:51.976 "data_offset": 256, 00:17:51.976 "data_size": 7936 00:17:51.976 }, 00:17:51.976 { 00:17:51.976 "name": null, 00:17:51.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.976 "is_configured": false, 00:17:51.976 "data_offset": 256, 00:17:51.976 "data_size": 7936 00:17:51.976 } 00:17:51.976 ] 00:17:51.976 }' 00:17:51.976 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.976 10:21:57 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.234 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:52.234 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:52.234 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:52.234 10:21:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:52.491 [2024-06-10 10:21:58.092792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:52.491 [2024-06-10 10:21:58.092863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.491 [2024-06-10 10:21:58.092877] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd23f00 00:17:52.491 [2024-06-10 10:21:58.092885] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.491 [2024-06-10 10:21:58.092938] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.491 [2024-06-10 10:21:58.092947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:52.491 [2024-06-10 10:21:58.092963] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:52.491 [2024-06-10 10:21:58.092971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.491 [2024-06-10 10:21:58.092998] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cd24180 00:17:52.491 [2024-06-10 10:21:58.093002] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:52.491 [2024-06-10 10:21:58.093019] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cd86e20 00:17:52.491 [2024-06-10 10:21:58.093031] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cd24180 00:17:52.491 [2024-06-10 10:21:58.093034] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cd24180 00:17:52.491 [2024-06-10 10:21:58.093044] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.491 pt2 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.750 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.007 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.007 "name": "raid_bdev1", 00:17:53.007 "uuid": "41f7f4fc-2713-11ef-b084-113036b5c18d", 00:17:53.007 "strip_size_kb": 0, 00:17:53.007 "state": "online", 00:17:53.007 "raid_level": "raid1", 00:17:53.007 "superblock": true, 00:17:53.007 "num_base_bdevs": 2, 00:17:53.007 "num_base_bdevs_discovered": 2, 00:17:53.007 "num_base_bdevs_operational": 2, 00:17:53.007 "base_bdevs_list": [ 00:17:53.007 { 00:17:53.007 "name": "pt1", 00:17:53.007 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.007 "is_configured": true, 00:17:53.007 "data_offset": 256, 00:17:53.007 "data_size": 7936 00:17:53.007 }, 00:17:53.007 { 00:17:53.007 "name": "pt2", 00:17:53.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.007 "is_configured": true, 00:17:53.007 "data_offset": 256, 00:17:53.007 "data_size": 7936 00:17:53.007 } 00:17:53.007 ] 00:17:53.007 }' 00:17:53.007 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.007 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.265 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:53.265 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:53.265 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:53.265 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:53.265 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:53.265 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:17:53.265 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:53.265 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:53.523 [2024-06-10 10:21:58.956901] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.523 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:53.523 "name": "raid_bdev1", 00:17:53.523 "aliases": [ 00:17:53.523 "41f7f4fc-2713-11ef-b084-113036b5c18d" 00:17:53.523 ], 00:17:53.523 "product_name": "Raid Volume", 00:17:53.523 "block_size": 4128, 00:17:53.523 "num_blocks": 7936, 00:17:53.523 "uuid": "41f7f4fc-2713-11ef-b084-113036b5c18d", 00:17:53.523 "md_size": 32, 00:17:53.523 "md_interleave": true, 
00:17:53.523 "dif_type": 0, 00:17:53.523 "assigned_rate_limits": { 00:17:53.523 "rw_ios_per_sec": 0, 00:17:53.523 "rw_mbytes_per_sec": 0, 00:17:53.523 "r_mbytes_per_sec": 0, 00:17:53.523 "w_mbytes_per_sec": 0 00:17:53.523 }, 00:17:53.523 "claimed": false, 00:17:53.523 "zoned": false, 00:17:53.523 "supported_io_types": { 00:17:53.523 "read": true, 00:17:53.523 "write": true, 00:17:53.523 "unmap": false, 00:17:53.523 "write_zeroes": true, 00:17:53.523 "flush": false, 00:17:53.523 "reset": true, 00:17:53.523 "compare": false, 00:17:53.523 "compare_and_write": false, 00:17:53.523 "abort": false, 00:17:53.523 "nvme_admin": false, 00:17:53.523 "nvme_io": false 00:17:53.523 }, 00:17:53.523 "memory_domains": [ 00:17:53.523 { 00:17:53.523 "dma_device_id": "system", 00:17:53.523 "dma_device_type": 1 00:17:53.523 }, 00:17:53.523 { 00:17:53.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.523 "dma_device_type": 2 00:17:53.523 }, 00:17:53.523 { 00:17:53.523 "dma_device_id": "system", 00:17:53.523 "dma_device_type": 1 00:17:53.523 }, 00:17:53.523 { 00:17:53.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.523 "dma_device_type": 2 00:17:53.523 } 00:17:53.523 ], 00:17:53.523 "driver_specific": { 00:17:53.523 "raid": { 00:17:53.523 "uuid": "41f7f4fc-2713-11ef-b084-113036b5c18d", 00:17:53.523 "strip_size_kb": 0, 00:17:53.523 "state": "online", 00:17:53.523 "raid_level": "raid1", 00:17:53.523 "superblock": true, 00:17:53.523 "num_base_bdevs": 2, 00:17:53.523 "num_base_bdevs_discovered": 2, 00:17:53.523 "num_base_bdevs_operational": 2, 00:17:53.523 "base_bdevs_list": [ 00:17:53.523 { 00:17:53.523 "name": "pt1", 00:17:53.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.523 "is_configured": true, 00:17:53.523 "data_offset": 256, 00:17:53.523 "data_size": 7936 00:17:53.523 }, 00:17:53.523 { 00:17:53.524 "name": "pt2", 00:17:53.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.524 "is_configured": true, 00:17:53.524 "data_offset": 256, 00:17:53.524 "data_size": 7936 00:17:53.524 } 00:17:53.524 ] 00:17:53.524 } 00:17:53.524 } 00:17:53.524 }' 00:17:53.524 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:53.524 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:53.524 pt2' 00:17:53.524 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:53.524 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:53.524 10:21:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:53.782 "name": "pt1", 00:17:53.782 "aliases": [ 00:17:53.782 "00000000-0000-0000-0000-000000000001" 00:17:53.782 ], 00:17:53.782 "product_name": "passthru", 00:17:53.782 "block_size": 4128, 00:17:53.782 "num_blocks": 8192, 00:17:53.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.782 "md_size": 32, 00:17:53.782 "md_interleave": true, 00:17:53.782 "dif_type": 0, 00:17:53.782 "assigned_rate_limits": { 00:17:53.782 "rw_ios_per_sec": 0, 00:17:53.782 "rw_mbytes_per_sec": 0, 00:17:53.782 "r_mbytes_per_sec": 0, 00:17:53.782 "w_mbytes_per_sec": 0 00:17:53.782 }, 00:17:53.782 
"claimed": true, 00:17:53.782 "claim_type": "exclusive_write", 00:17:53.782 "zoned": false, 00:17:53.782 "supported_io_types": { 00:17:53.782 "read": true, 00:17:53.782 "write": true, 00:17:53.782 "unmap": true, 00:17:53.782 "write_zeroes": true, 00:17:53.782 "flush": true, 00:17:53.782 "reset": true, 00:17:53.782 "compare": false, 00:17:53.782 "compare_and_write": false, 00:17:53.782 "abort": true, 00:17:53.782 "nvme_admin": false, 00:17:53.782 "nvme_io": false 00:17:53.782 }, 00:17:53.782 "memory_domains": [ 00:17:53.782 { 00:17:53.782 "dma_device_id": "system", 00:17:53.782 "dma_device_type": 1 00:17:53.782 }, 00:17:53.782 { 00:17:53.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.782 "dma_device_type": 2 00:17:53.782 } 00:17:53.782 ], 00:17:53.782 "driver_specific": { 00:17:53.782 "passthru": { 00:17:53.782 "name": "pt1", 00:17:53.782 "base_bdev_name": "malloc1" 00:17:53.782 } 00:17:53.782 } 00:17:53.782 }' 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:53.782 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:53.783 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:53.783 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:54.041 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:54.041 "name": "pt2", 00:17:54.041 "aliases": [ 00:17:54.041 "00000000-0000-0000-0000-000000000002" 00:17:54.041 ], 00:17:54.041 "product_name": "passthru", 00:17:54.041 "block_size": 4128, 00:17:54.041 "num_blocks": 8192, 00:17:54.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.041 "md_size": 32, 00:17:54.041 "md_interleave": true, 00:17:54.041 "dif_type": 0, 00:17:54.041 "assigned_rate_limits": { 00:17:54.041 "rw_ios_per_sec": 0, 00:17:54.041 "rw_mbytes_per_sec": 0, 00:17:54.041 "r_mbytes_per_sec": 0, 00:17:54.041 "w_mbytes_per_sec": 0 00:17:54.041 }, 00:17:54.041 "claimed": true, 00:17:54.041 "claim_type": "exclusive_write", 00:17:54.041 "zoned": false, 00:17:54.041 "supported_io_types": 
{ 00:17:54.041 "read": true, 00:17:54.041 "write": true, 00:17:54.041 "unmap": true, 00:17:54.041 "write_zeroes": true, 00:17:54.041 "flush": true, 00:17:54.041 "reset": true, 00:17:54.041 "compare": false, 00:17:54.041 "compare_and_write": false, 00:17:54.041 "abort": true, 00:17:54.041 "nvme_admin": false, 00:17:54.041 "nvme_io": false 00:17:54.041 }, 00:17:54.041 "memory_domains": [ 00:17:54.041 { 00:17:54.041 "dma_device_id": "system", 00:17:54.041 "dma_device_type": 1 00:17:54.041 }, 00:17:54.041 { 00:17:54.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.041 "dma_device_type": 2 00:17:54.041 } 00:17:54.041 ], 00:17:54.041 "driver_specific": { 00:17:54.041 "passthru": { 00:17:54.041 "name": "pt2", 00:17:54.041 "base_bdev_name": "malloc2" 00:17:54.041 } 00:17:54.041 } 00:17:54.041 }' 00:17:54.041 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.041 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.041 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:54.041 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.041 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:54.300 [2024-06-10 10:21:59.884939] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 41f7f4fc-2713-11ef-b084-113036b5c18d '!=' 41f7f4fc-2713-11ef-b084-113036b5c18d ']' 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:17:54.300 10:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:54.866 [2024-06-10 10:22:00.168926] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.866 "name": "raid_bdev1", 00:17:54.866 "uuid": "41f7f4fc-2713-11ef-b084-113036b5c18d", 00:17:54.866 "strip_size_kb": 0, 00:17:54.866 "state": "online", 00:17:54.866 "raid_level": "raid1", 00:17:54.866 "superblock": true, 00:17:54.866 "num_base_bdevs": 2, 00:17:54.866 "num_base_bdevs_discovered": 1, 00:17:54.866 "num_base_bdevs_operational": 1, 00:17:54.866 "base_bdevs_list": [ 00:17:54.866 { 00:17:54.866 "name": null, 00:17:54.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.866 "is_configured": false, 00:17:54.866 "data_offset": 256, 00:17:54.866 "data_size": 7936 00:17:54.866 }, 00:17:54.866 { 00:17:54.866 "name": "pt2", 00:17:54.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.866 "is_configured": true, 00:17:54.866 "data_offset": 256, 00:17:54.866 "data_size": 7936 00:17:54.866 } 00:17:54.866 ] 00:17:54.866 }' 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.866 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.124 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:55.381 [2024-06-10 10:22:00.960998] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.381 [2024-06-10 10:22:00.961022] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.381 [2024-06-10 10:22:00.961044] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.381 [2024-06-10 10:22:00.961056] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.381 [2024-06-10 10:22:00.961060] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd24180 name raid_bdev1, state offline 00:17:55.381 10:22:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:55.381 10:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:17:56.003 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:56.262 [2024-06-10 10:22:01.701017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.262 [2024-06-10 10:22:01.701070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.262 [2024-06-10 10:22:01.701081] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd23f00 00:17:56.262 [2024-06-10 10:22:01.701089] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.262 [2024-06-10 10:22:01.701544] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.262 [2024-06-10 10:22:01.701574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.262 [2024-06-10 10:22:01.701592] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:56.262 [2024-06-10 10:22:01.701603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.262 [2024-06-10 10:22:01.701620] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cd24180 00:17:56.262 [2024-06-10 10:22:01.701623] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:56.262 [2024-06-10 10:22:01.701642] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cd86e20 00:17:56.262 [2024-06-10 10:22:01.701653] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cd24180 00:17:56.262 [2024-06-10 10:22:01.701657] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cd24180 00:17:56.262 [2024-06-10 10:22:01.701666] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.262 pt2 00:17:56.262 10:22:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.262 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.521 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.521 "name": "raid_bdev1", 00:17:56.521 "uuid": "41f7f4fc-2713-11ef-b084-113036b5c18d", 00:17:56.521 "strip_size_kb": 0, 00:17:56.521 "state": "online", 00:17:56.521 "raid_level": "raid1", 00:17:56.521 "superblock": true, 00:17:56.521 "num_base_bdevs": 2, 00:17:56.521 "num_base_bdevs_discovered": 1, 00:17:56.521 "num_base_bdevs_operational": 1, 00:17:56.521 "base_bdevs_list": [ 00:17:56.521 { 00:17:56.521 "name": null, 00:17:56.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.521 "is_configured": false, 00:17:56.521 "data_offset": 256, 00:17:56.521 "data_size": 7936 00:17:56.521 }, 00:17:56.521 { 00:17:56.521 "name": "pt2", 00:17:56.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.521 "is_configured": true, 00:17:56.521 "data_offset": 256, 00:17:56.521 "data_size": 7936 00:17:56.521 } 00:17:56.521 ] 00:17:56.521 }' 00:17:56.521 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.521 10:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.779 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:57.037 [2024-06-10 10:22:02.417038] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.037 [2024-06-10 10:22:02.417065] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.037 [2024-06-10 10:22:02.417086] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.037 [2024-06-10 10:22:02.417097] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.037 [2024-06-10 10:22:02.417102] bdev_raid.c: 
367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd24180 name raid_bdev1, state offline 00:17:57.037 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.037 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:57.296 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:57.296 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:57.296 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:57.296 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.555 [2024-06-10 10:22:02.929099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.555 [2024-06-10 10:22:02.929183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.555 [2024-06-10 10:22:02.929206] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cd23c80 00:17:57.555 [2024-06-10 10:22:02.929224] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.555 [2024-06-10 10:22:02.929841] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.555 [2024-06-10 10:22:02.929896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.555 [2024-06-10 10:22:02.929933] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:57.555 [2024-06-10 10:22:02.929956] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.555 [2024-06-10 10:22:02.929992] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:57.555 [2024-06-10 10:22:02.930001] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.555 [2024-06-10 10:22:02.930013] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd23780 name raid_bdev1, state configuring 00:17:57.555 [2024-06-10 10:22:02.930033] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.555 [2024-06-10 10:22:02.930058] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cd23780 00:17:57.555 [2024-06-10 10:22:02.930066] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:57.555 [2024-06-10 10:22:02.930107] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cd86e20 00:17:57.555 [2024-06-10 10:22:02.930127] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cd23780 00:17:57.555 [2024-06-10 10:22:02.930137] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82cd23780 00:17:57.555 [2024-06-10 10:22:02.930168] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.555 pt1 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.555 10:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.814 10:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.814 "name": "raid_bdev1", 00:17:57.814 "uuid": "41f7f4fc-2713-11ef-b084-113036b5c18d", 00:17:57.814 "strip_size_kb": 0, 00:17:57.814 "state": "online", 00:17:57.814 "raid_level": "raid1", 00:17:57.814 "superblock": true, 00:17:57.814 "num_base_bdevs": 2, 00:17:57.814 "num_base_bdevs_discovered": 1, 00:17:57.814 "num_base_bdevs_operational": 1, 00:17:57.814 "base_bdevs_list": [ 00:17:57.814 { 00:17:57.814 "name": null, 00:17:57.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.814 "is_configured": false, 00:17:57.814 "data_offset": 256, 00:17:57.814 "data_size": 7936 00:17:57.814 }, 00:17:57.814 { 00:17:57.814 "name": "pt2", 00:17:57.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.814 "is_configured": true, 00:17:57.814 "data_offset": 256, 00:17:57.814 "data_size": 7936 00:17:57.814 } 00:17:57.814 ] 00:17:57.814 }' 00:17:57.814 10:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.814 10:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.072 10:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:58.072 10:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:58.348 10:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:58.348 10:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:58.348 10:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:58.638 [2024-06-10 10:22:04.021198] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 41f7f4fc-2713-11ef-b084-113036b5c18d '!=' 41f7f4fc-2713-11ef-b084-113036b5c18d ']' 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 67936 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@949 -- # '[' -z 67936 ']' 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # kill -0 67936 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # uname 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # ps -c -o command 67936 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # tail -1 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # process_name=bdev_svc 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' bdev_svc = sudo ']' 00:17:58.638 killing process with pid 67936 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # echo 'killing process with pid 67936' 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # kill 67936 00:17:58.638 [2024-06-10 10:22:04.054774] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.638 [2024-06-10 10:22:04.054816] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.638 [2024-06-10 10:22:04.054830] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.638 [2024-06-10 10:22:04.054835] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cd23780 name raid_bdev1, state offline 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # wait 67936 00:17:58.638 [2024-06-10 10:22:04.064655] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:17:58.638 00:17:58.638 real 0m12.912s 00:17:58.638 user 0m22.865s 00:17:58.638 sys 0m2.242s 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:58.638 10:22:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.638 ************************************ 00:17:58.638 END TEST raid_superblock_test_md_interleaved 00:17:58.638 ************************************ 00:17:58.897 10:22:04 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:58.897 10:22:04 bdev_raid -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:17:58.897 10:22:04 bdev_raid -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:58.897 10:22:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.897 ************************************ 00:17:58.897 START TEST raid_rebuild_test_sb_md_interleaved 00:17:58.897 ************************************ 00:17:58.897 10:22:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # raid_rebuild_test raid1 2 true false false 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=68323 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 68323 /var/tmp/spdk-raid.sock 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@830 -- # '[' -z 68323 ']' 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:58.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:58.897 10:22:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.897 [2024-06-10 10:22:04.289892] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:17:58.897 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:58.897 Zero copy mechanism will not be used. 00:17:58.897 [2024-06-10 10:22:04.290139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:17:59.464 EAL: TSC is not safe to use in SMP mode 00:17:59.464 EAL: TSC is not invariant 00:17:59.464 [2024-06-10 10:22:04.787070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.464 [2024-06-10 10:22:04.867616] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:59.464 [2024-06-10 10:22:04.869908] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.464 [2024-06-10 10:22:04.870674] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.464 [2024-06-10 10:22:04.870689] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:59.723 10:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:59.723 10:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@863 -- # return 0 00:17:59.723 10:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:17:59.723 10:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:00.289 BaseBdev1_malloc 00:18:00.289 10:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:00.289 [2024-06-10 10:22:05.821361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:00.289 [2024-06-10 10:22:05.821428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.289 [2024-06-10 10:22:05.822023] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a4d8780 00:18:00.289 [2024-06-10 10:22:05.822062] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.289 [2024-06-10 10:22:05.822782] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.289 [2024-06-10 10:22:05.822813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
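The xtrace above shows how each base bdev for this rebuild test is assembled before the raid1 bdev is created: a malloc bdev with 4096-byte blocks plus 32 bytes of interleaved metadata (which is why the bdev dumps in this log report a 4128-byte block_size), wrapped in a passthru bdev that the raid module can later claim and release. A minimal sketch of that RPC sequence, assuming the bdevperf app from this run is still listening on the same socket and using the same bdev names:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # 32 MB malloc bdev, 4096-byte blocks, 32-byte interleaved metadata (-m 32 -i)
    "$rpc" -s "$sock" bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
    # layer a passthru bdev on top so the test can claim/remove it independently of the malloc
    "$rpc" -s "$sock" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # the reported block size is data plus metadata: 4096 + 32 = 4128
    "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 | jq '.[0].block_size'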
00:18:00.289 BaseBdev1 00:18:00.289 10:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:00.289 10:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:00.549 BaseBdev2_malloc 00:18:00.549 10:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:00.819 [2024-06-10 10:22:06.357372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:00.819 [2024-06-10 10:22:06.357430] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.819 [2024-06-10 10:22:06.357457] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a4d8c80 00:18:00.819 [2024-06-10 10:22:06.357464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.819 [2024-06-10 10:22:06.357925] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.819 [2024-06-10 10:22:06.357954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:00.819 BaseBdev2 00:18:00.819 10:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:01.129 spare_malloc 00:18:01.129 10:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:01.389 spare_delay 00:18:01.389 10:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:01.647 [2024-06-10 10:22:07.033414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:01.647 [2024-06-10 10:22:07.033479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.648 [2024-06-10 10:22:07.033514] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a4d9400 00:18:01.648 [2024-06-10 10:22:07.033531] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.648 [2024-06-10 10:22:07.034041] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.648 [2024-06-10 10:22:07.034082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:01.648 spare 00:18:01.648 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:01.906 [2024-06-10 10:22:07.309427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.906 [2024-06-10 10:22:07.309896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.906 [2024-06-10 10:22:07.309956] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a4d9680 00:18:01.906 [2024-06-10 10:22:07.309961] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:01.906 [2024-06-10 10:22:07.309993] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a53be20 00:18:01.906 [2024-06-10 10:22:07.310006] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a4d9680 00:18:01.906 [2024-06-10 10:22:07.310009] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a4d9680 00:18:01.906 [2024-06-10 10:22:07.310021] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.906 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.165 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.165 "name": "raid_bdev1", 00:18:02.165 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:02.165 "strip_size_kb": 0, 00:18:02.165 "state": "online", 00:18:02.165 "raid_level": "raid1", 00:18:02.165 "superblock": true, 00:18:02.165 "num_base_bdevs": 2, 00:18:02.165 "num_base_bdevs_discovered": 2, 00:18:02.165 "num_base_bdevs_operational": 2, 00:18:02.165 "base_bdevs_list": [ 00:18:02.165 { 00:18:02.165 "name": "BaseBdev1", 00:18:02.165 "uuid": "b6c31aac-21c7-7657-82f0-fbfa6d4f4215", 00:18:02.165 "is_configured": true, 00:18:02.165 "data_offset": 256, 00:18:02.165 "data_size": 7936 00:18:02.165 }, 00:18:02.165 { 00:18:02.165 "name": "BaseBdev2", 00:18:02.165 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:02.165 "is_configured": true, 00:18:02.165 "data_offset": 256, 00:18:02.165 "data_size": 7936 00:18:02.165 } 00:18:02.165 ] 00:18:02.165 }' 00:18:02.165 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.165 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.424 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:02.424 10:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:02.682 [2024-06-10 10:22:08.049504] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.683 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:18:02.683 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.683 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:02.683 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:18:02.683 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:02.683 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:18:02.683 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:02.941 [2024-06-10 10:22:08.473440] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.941 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.941 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:02.942 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:02.942 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:02.942 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:02.942 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:02.942 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:02.942 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:02.942 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:02.942 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:02.942 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.942 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.201 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.201 "name": "raid_bdev1", 00:18:03.201 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:03.201 "strip_size_kb": 0, 00:18:03.201 "state": "online", 00:18:03.201 "raid_level": "raid1", 00:18:03.201 "superblock": true, 00:18:03.201 "num_base_bdevs": 2, 00:18:03.201 "num_base_bdevs_discovered": 1, 00:18:03.201 "num_base_bdevs_operational": 1, 00:18:03.201 "base_bdevs_list": [ 00:18:03.201 { 00:18:03.201 "name": 
null, 00:18:03.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.201 "is_configured": false, 00:18:03.201 "data_offset": 256, 00:18:03.201 "data_size": 7936 00:18:03.201 }, 00:18:03.201 { 00:18:03.201 "name": "BaseBdev2", 00:18:03.201 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:03.201 "is_configured": true, 00:18:03.201 "data_offset": 256, 00:18:03.201 "data_size": 7936 00:18:03.201 } 00:18:03.201 ] 00:18:03.201 }' 00:18:03.201 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.201 10:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.460 10:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.718 [2024-06-10 10:22:09.309495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.718 [2024-06-10 10:22:09.309634] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a53bec0 00:18:03.718 [2024-06-10 10:22:09.310397] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:03.976 10:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:18:04.910 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.911 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:04.911 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:04.911 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:04.911 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:04.911 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.911 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.169 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:05.169 "name": "raid_bdev1", 00:18:05.169 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:05.169 "strip_size_kb": 0, 00:18:05.169 "state": "online", 00:18:05.169 "raid_level": "raid1", 00:18:05.169 "superblock": true, 00:18:05.169 "num_base_bdevs": 2, 00:18:05.169 "num_base_bdevs_discovered": 2, 00:18:05.169 "num_base_bdevs_operational": 2, 00:18:05.169 "process": { 00:18:05.169 "type": "rebuild", 00:18:05.169 "target": "spare", 00:18:05.169 "progress": { 00:18:05.169 "blocks": 3072, 00:18:05.169 "percent": 38 00:18:05.169 } 00:18:05.169 }, 00:18:05.169 "base_bdevs_list": [ 00:18:05.169 { 00:18:05.169 "name": "spare", 00:18:05.169 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:05.169 "is_configured": true, 00:18:05.169 "data_offset": 256, 00:18:05.169 "data_size": 7936 00:18:05.169 }, 00:18:05.169 { 00:18:05.169 "name": "BaseBdev2", 00:18:05.169 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:05.169 "is_configured": true, 00:18:05.169 "data_offset": 256, 00:18:05.169 "data_size": 7936 00:18:05.169 } 00:18:05.169 ] 00:18:05.169 }' 00:18:05.169 10:22:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:05.169 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.169 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:05.169 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.169 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:05.428 [2024-06-10 10:22:10.893842] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.428 [2024-06-10 10:22:10.917094] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:05.428 [2024-06-10 10:22:10.917147] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.428 [2024-06-10 10:22:10.917153] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.428 [2024-06-10 10:22:10.917157] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.428 10:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.686 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.686 "name": "raid_bdev1", 00:18:05.686 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:05.686 "strip_size_kb": 0, 00:18:05.686 "state": "online", 00:18:05.686 "raid_level": "raid1", 00:18:05.686 "superblock": true, 00:18:05.686 "num_base_bdevs": 2, 00:18:05.686 "num_base_bdevs_discovered": 1, 00:18:05.686 "num_base_bdevs_operational": 1, 00:18:05.686 "base_bdevs_list": [ 00:18:05.686 { 00:18:05.686 "name": null, 00:18:05.686 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:05.686 "is_configured": false, 00:18:05.686 "data_offset": 256, 00:18:05.686 "data_size": 7936 00:18:05.686 }, 00:18:05.686 { 00:18:05.686 "name": "BaseBdev2", 00:18:05.686 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:05.686 "is_configured": true, 00:18:05.686 "data_offset": 256, 00:18:05.686 "data_size": 7936 00:18:05.686 } 00:18:05.686 ] 00:18:05.686 }' 00:18:05.687 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.687 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.945 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.945 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:05.945 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:05.945 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:05.945 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:05.945 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.945 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.204 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:06.204 "name": "raid_bdev1", 00:18:06.204 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:06.204 "strip_size_kb": 0, 00:18:06.204 "state": "online", 00:18:06.204 "raid_level": "raid1", 00:18:06.204 "superblock": true, 00:18:06.204 "num_base_bdevs": 2, 00:18:06.204 "num_base_bdevs_discovered": 1, 00:18:06.204 "num_base_bdevs_operational": 1, 00:18:06.204 "base_bdevs_list": [ 00:18:06.204 { 00:18:06.204 "name": null, 00:18:06.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.204 "is_configured": false, 00:18:06.204 "data_offset": 256, 00:18:06.204 "data_size": 7936 00:18:06.204 }, 00:18:06.204 { 00:18:06.204 "name": "BaseBdev2", 00:18:06.204 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:06.204 "is_configured": true, 00:18:06.204 "data_offset": 256, 00:18:06.204 "data_size": 7936 00:18:06.204 } 00:18:06.204 ] 00:18:06.204 }' 00:18:06.204 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:06.204 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:06.204 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:06.204 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:06.204 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.463 [2024-06-10 10:22:11.923578] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.463 [2024-06-10 10:22:11.923699] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a53be20 
00:18:06.463 [2024-06-10 10:22:11.924360] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.463 10:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:07.400 10:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.400 10:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:07.400 10:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:07.400 10:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:07.400 10:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:07.400 10:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.400 10:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.660 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:07.660 "name": "raid_bdev1", 00:18:07.660 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:07.660 "strip_size_kb": 0, 00:18:07.660 "state": "online", 00:18:07.660 "raid_level": "raid1", 00:18:07.660 "superblock": true, 00:18:07.660 "num_base_bdevs": 2, 00:18:07.660 "num_base_bdevs_discovered": 2, 00:18:07.660 "num_base_bdevs_operational": 2, 00:18:07.660 "process": { 00:18:07.660 "type": "rebuild", 00:18:07.660 "target": "spare", 00:18:07.660 "progress": { 00:18:07.660 "blocks": 3328, 00:18:07.660 "percent": 41 00:18:07.660 } 00:18:07.660 }, 00:18:07.660 "base_bdevs_list": [ 00:18:07.660 { 00:18:07.660 "name": "spare", 00:18:07.660 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:07.660 "is_configured": true, 00:18:07.660 "data_offset": 256, 00:18:07.660 "data_size": 7936 00:18:07.660 }, 00:18:07.660 { 00:18:07.660 "name": "BaseBdev2", 00:18:07.660 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:07.660 "is_configured": true, 00:18:07.660 "data_offset": 256, 00:18:07.660 "data_size": 7936 00:18:07.660 } 00:18:07.660 ] 00:18:07.660 }' 00:18:07.660 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:18:07.919 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:18:07.919 10:22:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=727 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.919 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.178 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:08.178 "name": "raid_bdev1", 00:18:08.178 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:08.178 "strip_size_kb": 0, 00:18:08.178 "state": "online", 00:18:08.178 "raid_level": "raid1", 00:18:08.178 "superblock": true, 00:18:08.178 "num_base_bdevs": 2, 00:18:08.178 "num_base_bdevs_discovered": 2, 00:18:08.178 "num_base_bdevs_operational": 2, 00:18:08.178 "process": { 00:18:08.178 "type": "rebuild", 00:18:08.178 "target": "spare", 00:18:08.178 "progress": { 00:18:08.178 "blocks": 4096, 00:18:08.178 "percent": 51 00:18:08.178 } 00:18:08.178 }, 00:18:08.178 "base_bdevs_list": [ 00:18:08.178 { 00:18:08.178 "name": "spare", 00:18:08.178 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:08.178 "is_configured": true, 00:18:08.178 "data_offset": 256, 00:18:08.178 "data_size": 7936 00:18:08.178 }, 00:18:08.178 { 00:18:08.178 "name": "BaseBdev2", 00:18:08.178 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:08.178 "is_configured": true, 00:18:08.178 "data_offset": 256, 00:18:08.178 "data_size": 7936 00:18:08.178 } 00:18:08.178 ] 00:18:08.178 }' 00:18:08.178 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:08.178 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:08.178 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:08.178 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.178 10:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:09.114 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:09.114 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.114 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:09.114 10:22:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:09.114 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:09.114 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:09.114 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.114 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.373 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:09.373 "name": "raid_bdev1", 00:18:09.373 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:09.373 "strip_size_kb": 0, 00:18:09.373 "state": "online", 00:18:09.373 "raid_level": "raid1", 00:18:09.373 "superblock": true, 00:18:09.373 "num_base_bdevs": 2, 00:18:09.373 "num_base_bdevs_discovered": 2, 00:18:09.373 "num_base_bdevs_operational": 2, 00:18:09.373 "process": { 00:18:09.373 "type": "rebuild", 00:18:09.373 "target": "spare", 00:18:09.373 "progress": { 00:18:09.373 "blocks": 7424, 00:18:09.373 "percent": 93 00:18:09.373 } 00:18:09.373 }, 00:18:09.373 "base_bdevs_list": [ 00:18:09.373 { 00:18:09.373 "name": "spare", 00:18:09.373 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:09.373 "is_configured": true, 00:18:09.373 "data_offset": 256, 00:18:09.373 "data_size": 7936 00:18:09.373 }, 00:18:09.373 { 00:18:09.373 "name": "BaseBdev2", 00:18:09.373 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:09.373 "is_configured": true, 00:18:09.373 "data_offset": 256, 00:18:09.373 "data_size": 7936 00:18:09.373 } 00:18:09.373 ] 00:18:09.373 }' 00:18:09.373 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:09.373 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.373 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:09.374 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.374 10:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:09.632 [2024-06-10 10:22:15.037283] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:09.632 [2024-06-10 10:22:15.037321] bdev_raid.c:2506:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:09.632 [2024-06-10 10:22:15.037377] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.567 10:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:10.567 10:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.567 10:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:10.567 10:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:10.567 10:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:10.567 10:22:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:10.567 10:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.567 10:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:10.825 "name": "raid_bdev1", 00:18:10.825 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:10.825 "strip_size_kb": 0, 00:18:10.825 "state": "online", 00:18:10.825 "raid_level": "raid1", 00:18:10.825 "superblock": true, 00:18:10.825 "num_base_bdevs": 2, 00:18:10.825 "num_base_bdevs_discovered": 2, 00:18:10.825 "num_base_bdevs_operational": 2, 00:18:10.825 "base_bdevs_list": [ 00:18:10.825 { 00:18:10.825 "name": "spare", 00:18:10.825 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:10.825 "is_configured": true, 00:18:10.825 "data_offset": 256, 00:18:10.825 "data_size": 7936 00:18:10.825 }, 00:18:10.825 { 00:18:10.825 "name": "BaseBdev2", 00:18:10.825 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:10.825 "is_configured": true, 00:18:10.825 "data_offset": 256, 00:18:10.825 "data_size": 7936 00:18:10.825 } 00:18:10.825 ] 00:18:10.825 }' 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.825 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:11.083 "name": "raid_bdev1", 00:18:11.083 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:11.083 "strip_size_kb": 0, 00:18:11.083 "state": "online", 00:18:11.083 "raid_level": "raid1", 00:18:11.083 "superblock": true, 00:18:11.083 "num_base_bdevs": 2, 00:18:11.083 "num_base_bdevs_discovered": 2, 00:18:11.083 "num_base_bdevs_operational": 2, 00:18:11.083 "base_bdevs_list": [ 00:18:11.083 { 00:18:11.083 "name": "spare", 
00:18:11.083 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:11.083 "is_configured": true, 00:18:11.083 "data_offset": 256, 00:18:11.083 "data_size": 7936 00:18:11.083 }, 00:18:11.083 { 00:18:11.083 "name": "BaseBdev2", 00:18:11.083 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:11.083 "is_configured": true, 00:18:11.083 "data_offset": 256, 00:18:11.083 "data_size": 7936 00:18:11.083 } 00:18:11.083 ] 00:18:11.083 }' 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.083 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.341 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.341 "name": "raid_bdev1", 00:18:11.341 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:11.341 "strip_size_kb": 0, 00:18:11.341 "state": "online", 00:18:11.341 "raid_level": "raid1", 00:18:11.341 "superblock": true, 00:18:11.341 "num_base_bdevs": 2, 00:18:11.341 "num_base_bdevs_discovered": 2, 00:18:11.341 "num_base_bdevs_operational": 2, 00:18:11.341 "base_bdevs_list": [ 00:18:11.341 { 00:18:11.341 "name": "spare", 00:18:11.341 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:11.341 "is_configured": true, 00:18:11.341 "data_offset": 256, 00:18:11.341 "data_size": 7936 00:18:11.341 }, 00:18:11.341 { 00:18:11.341 "name": "BaseBdev2", 00:18:11.341 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:11.341 "is_configured": true, 00:18:11.341 "data_offset": 256, 00:18:11.341 "data_size": 7936 00:18:11.341 } 00:18:11.341 ] 00:18:11.341 }' 00:18:11.341 10:22:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.341 10:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.906 10:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:12.164 [2024-06-10 10:22:17.684614] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.165 [2024-06-10 10:22:17.684645] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.165 [2024-06-10 10:22:17.684667] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.165 [2024-06-10 10:22:17.684682] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.165 [2024-06-10 10:22:17.684687] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a4d9680 name raid_bdev1, state offline 00:18:12.165 10:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.165 10:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:18:12.424 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:18:12.424 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:18:12.424 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:18:12.424 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:12.683 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:13.249 [2024-06-10 10:22:18.608657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:13.249 [2024-06-10 10:22:18.608718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.249 [2024-06-10 10:22:18.608746] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a4d9400 00:18:13.249 [2024-06-10 10:22:18.608769] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.249 [2024-06-10 10:22:18.609256] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.249 [2024-06-10 10:22:18.609284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:13.249 [2024-06-10 10:22:18.609307] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:13.249 [2024-06-10 10:22:18.609319] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.249 [2024-06-10 10:22:18.609340] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.249 spare 00:18:13.249 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
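The verify_raid_bdev_process checks traced above (bdev_raid.sh lines 182-190) reduce to a small helper: dump the raid bdevs over the test RPC socket, pick the one under test, and compare the optional .process fields against the expected values. A minimal reconstruction from the commands visible in the xtrace (how the function reports failure is not shown in the trace and is assumed here):

verify_raid_bdev_process() {
    local raid_bdev_name=$1    # e.g. raid_bdev1
    local process_type=$2      # expected .process.type: "rebuild" or "none"
    local target=$3            # expected .process.target: "spare" or "none"
    local raid_bdev_info

    # Line 187: fetch every raid bdev over the RPC socket, keep only the one under test.
    raid_bdev_info=$(/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r ".[] | select(.name == \"$raid_bdev_name\")")

    # Lines 189-190: when no background process is running the .process object is
    # absent, so both fields fall back to "none" before the comparison.
    [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]]
    [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
}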
00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.250 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.250 [2024-06-10 10:22:18.709366] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a4d9680 00:18:13.250 [2024-06-10 10:22:18.709392] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:13.250 [2024-06-10 10:22:18.709437] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a53be20 00:18:13.250 [2024-06-10 10:22:18.709463] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a4d9680 00:18:13.250 [2024-06-10 10:22:18.709467] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82a4d9680 00:18:13.250 [2024-06-10 10:22:18.709481] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.586 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.587 "name": "raid_bdev1", 00:18:13.587 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:13.587 "strip_size_kb": 0, 00:18:13.587 "state": "online", 00:18:13.587 "raid_level": "raid1", 00:18:13.587 "superblock": true, 00:18:13.587 "num_base_bdevs": 2, 00:18:13.587 "num_base_bdevs_discovered": 2, 00:18:13.587 "num_base_bdevs_operational": 2, 00:18:13.587 "base_bdevs_list": [ 00:18:13.587 { 00:18:13.587 "name": "spare", 00:18:13.587 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:13.587 "is_configured": true, 00:18:13.587 "data_offset": 256, 00:18:13.587 "data_size": 7936 00:18:13.587 }, 00:18:13.587 { 00:18:13.587 "name": "BaseBdev2", 00:18:13.587 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:13.587 "is_configured": true, 00:18:13.587 "data_offset": 256, 00:18:13.587 "data_size": 7936 00:18:13.587 } 00:18:13.587 ] 00:18:13.587 }' 00:18:13.587 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.587 10:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.846 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.846 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:18:13.847 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:13.847 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:13.847 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:13.847 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.847 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.106 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.106 "name": "raid_bdev1", 00:18:14.106 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:14.106 "strip_size_kb": 0, 00:18:14.106 "state": "online", 00:18:14.106 "raid_level": "raid1", 00:18:14.106 "superblock": true, 00:18:14.106 "num_base_bdevs": 2, 00:18:14.106 "num_base_bdevs_discovered": 2, 00:18:14.106 "num_base_bdevs_operational": 2, 00:18:14.106 "base_bdevs_list": [ 00:18:14.106 { 00:18:14.106 "name": "spare", 00:18:14.106 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:14.106 "is_configured": true, 00:18:14.106 "data_offset": 256, 00:18:14.106 "data_size": 7936 00:18:14.106 }, 00:18:14.106 { 00:18:14.106 "name": "BaseBdev2", 00:18:14.106 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:14.106 "is_configured": true, 00:18:14.106 "data_offset": 256, 00:18:14.106 "data_size": 7936 00:18:14.106 } 00:18:14.106 ] 00:18:14.106 }' 00:18:14.106 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:14.106 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:14.106 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:14.106 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:14.106 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.106 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:14.365 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.365 10:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:14.625 [2024-06-10 10:22:20.112732] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:14.625 10:22:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.625 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.884 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.884 "name": "raid_bdev1", 00:18:14.884 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:14.884 "strip_size_kb": 0, 00:18:14.884 "state": "online", 00:18:14.884 "raid_level": "raid1", 00:18:14.884 "superblock": true, 00:18:14.884 "num_base_bdevs": 2, 00:18:14.884 "num_base_bdevs_discovered": 1, 00:18:14.884 "num_base_bdevs_operational": 1, 00:18:14.884 "base_bdevs_list": [ 00:18:14.884 { 00:18:14.884 "name": null, 00:18:14.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.884 "is_configured": false, 00:18:14.884 "data_offset": 256, 00:18:14.884 "data_size": 7936 00:18:14.884 }, 00:18:14.884 { 00:18:14.884 "name": "BaseBdev2", 00:18:14.884 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:14.884 "is_configured": true, 00:18:14.884 "data_offset": 256, 00:18:14.884 "data_size": 7936 00:18:14.884 } 00:18:14.884 ] 00:18:14.884 }' 00:18:14.884 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.884 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.142 10:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:15.401 [2024-06-10 10:22:20.980789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.401 [2024-06-10 10:22:20.980863] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.401 [2024-06-10 10:22:20.980867] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
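Each rebuild pass in this test is monitored the same way: wait briefly, then re-read the bdev and check its .process fields. The bounded wait seen earlier in the trace (bdev_raid.sh lines 705-710) reduces to roughly the loop below; the deadline value 727 is the one visible in the trace, and whether the break is driven by the helper's exit status or a separate check is an assumption.

timeout=727                     # deadline against bash's SECONDS counter (trace shows "local timeout=727")
while ((SECONDS < timeout)); do
    if ! verify_raid_bdev_process raid_bdev1 rebuild spare; then
        break                   # .process disappeared: the rebuild has finished
    fi
    sleep 1                     # line 710
done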
00:18:15.401 [2024-06-10 10:22:20.980915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.401 [2024-06-10 10:22:20.980984] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a53bec0 00:18:15.401 [2024-06-10 10:22:20.981436] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:15.401 10:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:16.817 "name": "raid_bdev1", 00:18:16.817 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:16.817 "strip_size_kb": 0, 00:18:16.817 "state": "online", 00:18:16.817 "raid_level": "raid1", 00:18:16.817 "superblock": true, 00:18:16.817 "num_base_bdevs": 2, 00:18:16.817 "num_base_bdevs_discovered": 2, 00:18:16.817 "num_base_bdevs_operational": 2, 00:18:16.817 "process": { 00:18:16.817 "type": "rebuild", 00:18:16.817 "target": "spare", 00:18:16.817 "progress": { 00:18:16.817 "blocks": 3328, 00:18:16.817 "percent": 41 00:18:16.817 } 00:18:16.817 }, 00:18:16.817 "base_bdevs_list": [ 00:18:16.817 { 00:18:16.817 "name": "spare", 00:18:16.817 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:16.817 "is_configured": true, 00:18:16.817 "data_offset": 256, 00:18:16.817 "data_size": 7936 00:18:16.817 }, 00:18:16.817 { 00:18:16.817 "name": "BaseBdev2", 00:18:16.817 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:16.817 "is_configured": true, 00:18:16.817 "data_offset": 256, 00:18:16.817 "data_size": 7936 00:18:16.817 } 00:18:16.817 ] 00:18:16.817 }' 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.817 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:17.075 [2024-06-10 10:22:22.584912] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.075 [2024-06-10 10:22:22.588091] 
bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:17.075 [2024-06-10 10:22:22.588130] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.075 [2024-06-10 10:22:22.588135] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:17.075 [2024-06-10 10:22:22.588139] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.075 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.643 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.643 "name": "raid_bdev1", 00:18:17.643 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:17.643 "strip_size_kb": 0, 00:18:17.643 "state": "online", 00:18:17.643 "raid_level": "raid1", 00:18:17.643 "superblock": true, 00:18:17.643 "num_base_bdevs": 2, 00:18:17.643 "num_base_bdevs_discovered": 1, 00:18:17.643 "num_base_bdevs_operational": 1, 00:18:17.643 "base_bdevs_list": [ 00:18:17.643 { 00:18:17.643 "name": null, 00:18:17.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.643 "is_configured": false, 00:18:17.643 "data_offset": 256, 00:18:17.643 "data_size": 7936 00:18:17.643 }, 00:18:17.643 { 00:18:17.643 "name": "BaseBdev2", 00:18:17.643 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:17.643 "is_configured": true, 00:18:17.643 "data_offset": 256, 00:18:17.643 "data_size": 7936 00:18:17.643 } 00:18:17.643 ] 00:18:17.643 }' 00:18:17.643 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.643 10:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.902 10:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:17.902 
[2024-06-10 10:22:23.458556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:17.902 [2024-06-10 10:22:23.458602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.902 [2024-06-10 10:22:23.458625] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a4d9400 00:18:17.902 [2024-06-10 10:22:23.458632] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.903 [2024-06-10 10:22:23.458681] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.903 [2024-06-10 10:22:23.458688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:17.903 [2024-06-10 10:22:23.458702] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:17.903 [2024-06-10 10:22:23.458706] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:17.903 [2024-06-10 10:22:23.458709] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:17.903 [2024-06-10 10:22:23.458718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:17.903 [2024-06-10 10:22:23.458789] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a53be20 00:18:17.903 [2024-06-10 10:22:23.459215] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:17.903 spare 00:18:17.903 10:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.280 "name": "raid_bdev1", 00:18:19.280 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:19.280 "strip_size_kb": 0, 00:18:19.280 "state": "online", 00:18:19.280 "raid_level": "raid1", 00:18:19.280 "superblock": true, 00:18:19.280 "num_base_bdevs": 2, 00:18:19.280 "num_base_bdevs_discovered": 2, 00:18:19.280 "num_base_bdevs_operational": 2, 00:18:19.280 "process": { 00:18:19.280 "type": "rebuild", 00:18:19.280 "target": "spare", 00:18:19.280 "progress": { 00:18:19.280 "blocks": 3328, 00:18:19.280 "percent": 41 00:18:19.280 } 00:18:19.280 }, 00:18:19.280 "base_bdevs_list": [ 00:18:19.280 { 00:18:19.280 "name": "spare", 00:18:19.280 "uuid": "420b51a2-23ae-9d56-a54d-b96e7e2ef973", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 256, 00:18:19.280 
"data_size": 7936 00:18:19.280 }, 00:18:19.280 { 00:18:19.280 "name": "BaseBdev2", 00:18:19.280 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 256, 00:18:19.280 "data_size": 7936 00:18:19.280 } 00:18:19.280 ] 00:18:19.280 }' 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.280 10:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:19.539 [2024-06-10 10:22:25.078975] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.797 [2024-06-10 10:22:25.166118] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:19.797 [2024-06-10 10:22:25.166178] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.797 [2024-06-10 10:22:25.166183] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.797 [2024-06-10 10:22:25.166186] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.797 "name": "raid_bdev1", 00:18:19.797 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:19.797 "strip_size_kb": 0, 00:18:19.797 "state": "online", 00:18:19.797 
"raid_level": "raid1", 00:18:19.797 "superblock": true, 00:18:19.797 "num_base_bdevs": 2, 00:18:19.797 "num_base_bdevs_discovered": 1, 00:18:19.797 "num_base_bdevs_operational": 1, 00:18:19.797 "base_bdevs_list": [ 00:18:19.797 { 00:18:19.797 "name": null, 00:18:19.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.797 "is_configured": false, 00:18:19.797 "data_offset": 256, 00:18:19.797 "data_size": 7936 00:18:19.797 }, 00:18:19.797 { 00:18:19.797 "name": "BaseBdev2", 00:18:19.797 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:19.797 "is_configured": true, 00:18:19.797 "data_offset": 256, 00:18:19.797 "data_size": 7936 00:18:19.797 } 00:18:19.797 ] 00:18:19.797 }' 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.797 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.362 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.362 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:20.362 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:20.362 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:20.362 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:20.362 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.362 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.362 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:20.362 "name": "raid_bdev1", 00:18:20.362 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:20.362 "strip_size_kb": 0, 00:18:20.362 "state": "online", 00:18:20.362 "raid_level": "raid1", 00:18:20.362 "superblock": true, 00:18:20.362 "num_base_bdevs": 2, 00:18:20.362 "num_base_bdevs_discovered": 1, 00:18:20.362 "num_base_bdevs_operational": 1, 00:18:20.362 "base_bdevs_list": [ 00:18:20.362 { 00:18:20.362 "name": null, 00:18:20.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.362 "is_configured": false, 00:18:20.362 "data_offset": 256, 00:18:20.362 "data_size": 7936 00:18:20.362 }, 00:18:20.362 { 00:18:20.362 "name": "BaseBdev2", 00:18:20.362 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:20.362 "is_configured": true, 00:18:20.362 "data_offset": 256, 00:18:20.362 "data_size": 7936 00:18:20.362 } 00:18:20.362 ] 00:18:20.362 }' 00:18:20.362 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:20.363 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:20.363 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:20.363 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:20.363 10:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:18:20.620 10:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:20.879 [2024-06-10 10:22:26.416622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:20.879 [2024-06-10 10:22:26.416666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.879 [2024-06-10 10:22:26.416690] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a4d8780 00:18:20.879 [2024-06-10 10:22:26.416697] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.879 [2024-06-10 10:22:26.416744] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.879 [2024-06-10 10:22:26.416752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:20.879 [2024-06-10 10:22:26.416766] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:20.879 [2024-06-10 10:22:26.416771] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.879 [2024-06-10 10:22:26.416774] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:20.879 BaseBdev1 00:18:20.879 10:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:18:22.265 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.266 "name": "raid_bdev1", 00:18:22.266 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:22.266 "strip_size_kb": 0, 00:18:22.266 "state": "online", 00:18:22.266 "raid_level": "raid1", 00:18:22.266 "superblock": true, 00:18:22.266 "num_base_bdevs": 2, 
00:18:22.266 "num_base_bdevs_discovered": 1, 00:18:22.266 "num_base_bdevs_operational": 1, 00:18:22.266 "base_bdevs_list": [ 00:18:22.266 { 00:18:22.266 "name": null, 00:18:22.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.266 "is_configured": false, 00:18:22.266 "data_offset": 256, 00:18:22.266 "data_size": 7936 00:18:22.266 }, 00:18:22.266 { 00:18:22.266 "name": "BaseBdev2", 00:18:22.266 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:22.266 "is_configured": true, 00:18:22.266 "data_offset": 256, 00:18:22.266 "data_size": 7936 00:18:22.266 } 00:18:22.266 ] 00:18:22.266 }' 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.266 10:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.523 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:22.523 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:22.523 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:22.523 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:22.523 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:22.523 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.523 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.781 "name": "raid_bdev1", 00:18:22.781 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:22.781 "strip_size_kb": 0, 00:18:22.781 "state": "online", 00:18:22.781 "raid_level": "raid1", 00:18:22.781 "superblock": true, 00:18:22.781 "num_base_bdevs": 2, 00:18:22.781 "num_base_bdevs_discovered": 1, 00:18:22.781 "num_base_bdevs_operational": 1, 00:18:22.781 "base_bdevs_list": [ 00:18:22.781 { 00:18:22.781 "name": null, 00:18:22.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.781 "is_configured": false, 00:18:22.781 "data_offset": 256, 00:18:22.781 "data_size": 7936 00:18:22.781 }, 00:18:22.781 { 00:18:22.781 "name": "BaseBdev2", 00:18:22.781 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:22.781 "is_configured": true, 00:18:22.781 "data_offset": 256, 00:18:22.781 "data_size": 7936 00:18:22.781 } 00:18:22.781 ] 00:18:22.781 }' 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.781 10:22:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@649 -- # local es=0 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@637 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@643 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@643 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@643 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:22.781 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:23.038 [2024-06-10 10:22:28.524729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.038 [2024-06-10 10:22:28.524783] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:23.038 [2024-06-10 10:22:28.524787] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:23.038 request: 00:18:23.038 { 00:18:23.038 "raid_bdev": "raid_bdev1", 00:18:23.038 "base_bdev": "BaseBdev1", 00:18:23.038 "method": "bdev_raid_add_base_bdev", 00:18:23.038 "req_id": 1 00:18:23.038 } 00:18:23.038 Got JSON-RPC error response 00:18:23.038 response: 00:18:23.038 { 00:18:23.038 "code": -22, 00:18:23.038 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:23.038 } 00:18:23.038 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # es=1 00:18:23.038 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:23.038 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:23.038 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:23.038 10:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:18:24.488 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.488 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:24.488 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.489 "name": "raid_bdev1", 00:18:24.489 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:24.489 "strip_size_kb": 0, 00:18:24.489 "state": "online", 00:18:24.489 "raid_level": "raid1", 00:18:24.489 "superblock": true, 00:18:24.489 "num_base_bdevs": 2, 00:18:24.489 "num_base_bdevs_discovered": 1, 00:18:24.489 "num_base_bdevs_operational": 1, 00:18:24.489 "base_bdevs_list": [ 00:18:24.489 { 00:18:24.489 "name": null, 00:18:24.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.489 "is_configured": false, 00:18:24.489 "data_offset": 256, 00:18:24.489 "data_size": 7936 00:18:24.489 }, 00:18:24.489 { 00:18:24.489 "name": "BaseBdev2", 00:18:24.489 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:24.489 "is_configured": true, 00:18:24.489 "data_offset": 256, 00:18:24.489 "data_size": 7936 00:18:24.489 } 00:18:24.489 ] 00:18:24.489 }' 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.489 10:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.792 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.792 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:24.792 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:24.792 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:24.792 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:24.792 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.792 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:25.050 
"name": "raid_bdev1", 00:18:25.050 "uuid": "4a1da80b-2713-11ef-b084-113036b5c18d", 00:18:25.050 "strip_size_kb": 0, 00:18:25.050 "state": "online", 00:18:25.050 "raid_level": "raid1", 00:18:25.050 "superblock": true, 00:18:25.050 "num_base_bdevs": 2, 00:18:25.050 "num_base_bdevs_discovered": 1, 00:18:25.050 "num_base_bdevs_operational": 1, 00:18:25.050 "base_bdevs_list": [ 00:18:25.050 { 00:18:25.050 "name": null, 00:18:25.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.050 "is_configured": false, 00:18:25.050 "data_offset": 256, 00:18:25.050 "data_size": 7936 00:18:25.050 }, 00:18:25.050 { 00:18:25.050 "name": "BaseBdev2", 00:18:25.050 "uuid": "71f4b794-7145-4159-9ea4-10292f925e2d", 00:18:25.050 "is_configured": true, 00:18:25.050 "data_offset": 256, 00:18:25.050 "data_size": 7936 00:18:25.050 } 00:18:25.050 ] 00:18:25.050 }' 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 68323 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@949 -- # '[' -z 68323 ']' 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # kill -0 68323 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # uname 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:18:25.050 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # ps -c -o command 68323 00:18:25.051 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # tail -1 00:18:25.051 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # process_name=bdevperf 00:18:25.051 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' bdevperf = sudo ']' 00:18:25.051 killing process with pid 68323 00:18:25.051 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # echo 'killing process with pid 68323' 00:18:25.051 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # kill 68323 00:18:25.051 Received shutdown signal, test time was about 60.000000 seconds 00:18:25.051 00:18:25.051 Latency(us) 00:18:25.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.051 =================================================================================================================== 00:18:25.051 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.051 [2024-06-10 10:22:30.493121] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.051 [2024-06-10 10:22:30.493155] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.051 [2024-06-10 10:22:30.493166] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:25.051 [2024-06-10 10:22:30.493171] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a4d9680 name raid_bdev1, state offline 00:18:25.051 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # wait 68323 00:18:25.051 [2024-06-10 10:22:30.507718] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.309 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:18:25.309 00:18:25.309 real 0m26.401s 00:18:25.309 user 0m40.826s 00:18:25.309 sys 0m2.667s 00:18:25.309 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:25.309 10:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.309 ************************************ 00:18:25.309 END TEST raid_rebuild_test_sb_md_interleaved 00:18:25.309 ************************************ 00:18:25.309 10:22:30 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:18:25.309 10:22:30 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:18:25.309 10:22:30 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 68323 ']' 00:18:25.309 10:22:30 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 68323 00:18:25.309 10:22:30 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:18:25.309 00:18:25.309 real 11m54.523s 00:18:25.309 user 20m55.392s 00:18:25.309 sys 1m45.102s 00:18:25.309 10:22:30 bdev_raid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:25.309 10:22:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.309 ************************************ 00:18:25.309 END TEST bdev_raid 00:18:25.309 ************************************ 00:18:25.309 10:22:30 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:18:25.309 10:22:30 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:18:25.309 10:22:30 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:25.309 10:22:30 -- common/autotest_common.sh@10 -- # set +x 00:18:25.309 ************************************ 00:18:25.309 START TEST bdevperf_config 00:18:25.309 ************************************ 00:18:25.309 10:22:30 bdevperf_config -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:18:25.569 * Looking for test storage... 
00:18:25.569 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:18:25.569 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:25.569 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:25.569 10:22:30 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:25.569 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:25.569 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:25.569 10:22:31 bdevperf_config -- 
bdevperf/common.sh@20 -- # cat 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:25.569 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:25.569 10:22:31 bdevperf_config -- bdevperf/test_config.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:28.854 10:22:33 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-06-10 10:22:31.016450] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:28.854 [2024-06-10 10:22:31.016654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:28.854 Using job config with 4 jobs 00:18:28.854 EAL: TSC is not safe to use in SMP mode 00:18:28.854 EAL: TSC is not invariant 00:18:28.854 [2024-06-10 10:22:31.527115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.854 [2024-06-10 10:22:31.610190] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:28.854 [2024-06-10 10:22:31.612454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.854 cpumask for '\''job0'\'' is too big 00:18:28.854 cpumask for '\''job1'\'' is too big 00:18:28.854 cpumask for '\''job2'\'' is too big 00:18:28.854 cpumask for '\''job3'\'' is too big 00:18:28.854 Running I/O for 2 seconds... 00:18:28.854 00:18:28.854 Latency(us) 00:18:28.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370663.29 361.98 0.00 0.00 690.42 162.86 1466.76 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370680.82 361.99 0.00 0.00 690.25 185.30 1466.76 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370659.33 361.97 0.00 0.00 690.16 168.72 1490.16 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370641.00 361.95 0.00 0.00 690.07 175.54 1482.36 00:18:28.854 =================================================================================================================== 00:18:28.854 Total : 1482644.44 1447.89 0.00 0.00 690.22 162.86 1490.16' 00:18:28.854 10:22:33 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-06-10 10:22:31.016450] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:18:28.854 [2024-06-10 10:22:31.016654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:28.854 Using job config with 4 jobs 00:18:28.854 EAL: TSC is not safe to use in SMP mode 00:18:28.854 EAL: TSC is not invariant 00:18:28.854 [2024-06-10 10:22:31.527115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.854 [2024-06-10 10:22:31.610190] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:28.854 [2024-06-10 10:22:31.612454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.854 cpumask for '\''job0'\'' is too big 00:18:28.854 cpumask for '\''job1'\'' is too big 00:18:28.854 cpumask for '\''job2'\'' is too big 00:18:28.854 cpumask for '\''job3'\'' is too big 00:18:28.854 Running I/O for 2 seconds... 00:18:28.854 00:18:28.854 Latency(us) 00:18:28.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370663.29 361.98 0.00 0.00 690.42 162.86 1466.76 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370680.82 361.99 0.00 0.00 690.25 185.30 1466.76 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370659.33 361.97 0.00 0.00 690.16 168.72 1490.16 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370641.00 361.95 0.00 0.00 690.07 175.54 1482.36 00:18:28.854 =================================================================================================================== 00:18:28.854 Total : 1482644.44 1447.89 0.00 0.00 690.22 162.86 1490.16' 00:18:28.854 10:22:33 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-06-10 10:22:31.016450] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:28.854 [2024-06-10 10:22:31.016654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:28.854 Using job config with 4 jobs 00:18:28.854 EAL: TSC is not safe to use in SMP mode 00:18:28.854 EAL: TSC is not invariant 00:18:28.854 [2024-06-10 10:22:31.527115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.854 [2024-06-10 10:22:31.610190] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:28.854 [2024-06-10 10:22:31.612454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.854 cpumask for '\''job0'\'' is too big 00:18:28.854 cpumask for '\''job1'\'' is too big 00:18:28.854 cpumask for '\''job2'\'' is too big 00:18:28.854 cpumask for '\''job3'\'' is too big 00:18:28.854 Running I/O for 2 seconds... 
00:18:28.854 00:18:28.854 Latency(us) 00:18:28.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370663.29 361.98 0.00 0.00 690.42 162.86 1466.76 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370680.82 361.99 0.00 0.00 690.25 185.30 1466.76 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.854 Malloc0 : 2.00 370659.33 361.97 0.00 0.00 690.16 168.72 1490.16 00:18:28.854 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:28.855 Malloc0 : 2.00 370641.00 361.95 0.00 0.00 690.07 175.54 1482.36 00:18:28.855 =================================================================================================================== 00:18:28.855 Total : 1482644.44 1447.89 0.00 0.00 690.22 162.86 1490.16' 00:18:28.855 10:22:33 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:28.855 10:22:33 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:28.855 10:22:33 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:18:28.855 10:22:33 bdevperf_config -- bdevperf/test_config.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:28.855 [2024-06-10 10:22:33.847849] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:28.855 [2024-06-10 10:22:33.848002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:28.855 EAL: TSC is not safe to use in SMP mode 00:18:28.855 EAL: TSC is not invariant 00:18:28.855 [2024-06-10 10:22:34.367805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.855 [2024-06-10 10:22:34.450808] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:28.855 [2024-06-10 10:22:34.453063] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.113 cpumask for 'job0' is too big 00:18:29.113 cpumask for 'job1' is too big 00:18:29.113 cpumask for 'job2' is too big 00:18:29.113 cpumask for 'job3' is too big 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:18:31.642 Running I/O for 2 seconds... 
00:18:31.642 00:18:31.642 Latency(us) 00:18:31.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.642 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:31.642 Malloc0 : 2.00 363376.92 354.86 0.00 0.00 704.25 182.37 1669.61 00:18:31.642 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:31.642 Malloc0 : 2.00 363356.59 354.84 0.00 0.00 704.13 180.42 1669.61 00:18:31.642 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:31.642 Malloc0 : 2.00 363330.09 354.81 0.00 0.00 704.06 173.59 1630.60 00:18:31.642 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:31.642 Malloc0 : 2.00 363379.73 354.86 0.00 0.00 703.84 103.86 1638.40 00:18:31.642 =================================================================================================================== 00:18:31.642 Total : 1453443.33 1419.38 0.00 0.00 704.07 103.86 1669.61' 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:31.642 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:31.642 10:22:36 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:31.643 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:31.643 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:31.643 10:22:36 bdevperf_config -- bdevperf/test_config.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-06-10 10:22:36.703289] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:34.177 [2024-06-10 10:22:36.703462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:34.177 Using job config with 3 jobs 00:18:34.177 EAL: TSC is not safe to use in SMP mode 00:18:34.177 EAL: TSC is not invariant 00:18:34.177 [2024-06-10 10:22:37.179952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.177 [2024-06-10 10:22:37.264193] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:34.177 [2024-06-10 10:22:37.266447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.177 cpumask for '\''job0'\'' is too big 00:18:34.177 cpumask for '\''job1'\'' is too big 00:18:34.177 cpumask for '\''job2'\'' is too big 00:18:34.177 Running I/O for 2 seconds... 00:18:34.177 00:18:34.177 Latency(us) 00:18:34.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.177 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:34.177 Malloc0 : 2.00 421170.68 411.30 0.00 0.00 607.56 246.74 1217.10 00:18:34.177 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:34.177 Malloc0 : 2.00 421187.90 411.32 0.00 0.00 607.38 166.77 1201.49 00:18:34.177 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:34.177 Malloc0 : 2.00 421172.54 411.30 0.00 0.00 607.30 140.43 1209.30 00:18:34.177 =================================================================================================================== 00:18:34.177 Total : 1263531.11 1233.92 0.00 0.00 607.41 140.43 1217.10' 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-06-10 10:22:36.703289] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:34.177 [2024-06-10 10:22:36.703462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:34.177 Using job config with 3 jobs 00:18:34.177 EAL: TSC is not safe to use in SMP mode 00:18:34.177 EAL: TSC is not invariant 00:18:34.177 [2024-06-10 10:22:37.179952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.177 [2024-06-10 10:22:37.264193] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:34.177 [2024-06-10 10:22:37.266447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.177 cpumask for '\''job0'\'' is too big 00:18:34.177 cpumask for '\''job1'\'' is too big 00:18:34.177 cpumask for '\''job2'\'' is too big 00:18:34.177 Running I/O for 2 seconds... 
00:18:34.177 00:18:34.177 Latency(us) 00:18:34.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.177 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:34.177 Malloc0 : 2.00 421170.68 411.30 0.00 0.00 607.56 246.74 1217.10 00:18:34.177 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:34.177 Malloc0 : 2.00 421187.90 411.32 0.00 0.00 607.38 166.77 1201.49 00:18:34.177 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:34.177 Malloc0 : 2.00 421172.54 411.30 0.00 0.00 607.30 140.43 1209.30 00:18:34.177 =================================================================================================================== 00:18:34.177 Total : 1263531.11 1233.92 0.00 0.00 607.41 140.43 1217.10' 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-06-10 10:22:36.703289] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:34.177 [2024-06-10 10:22:36.703462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:34.177 Using job config with 3 jobs 00:18:34.177 EAL: TSC is not safe to use in SMP mode 00:18:34.177 EAL: TSC is not invariant 00:18:34.177 [2024-06-10 10:22:37.179952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.177 [2024-06-10 10:22:37.264193] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:34.177 [2024-06-10 10:22:37.266447] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.177 cpumask for '\''job0'\'' is too big 00:18:34.177 cpumask for '\''job1'\'' is too big 00:18:34.177 cpumask for '\''job2'\'' is too big 00:18:34.177 Running I/O for 2 seconds... 
00:18:34.177 00:18:34.177 Latency(us) 00:18:34.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.177 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:34.177 Malloc0 : 2.00 421170.68 411.30 0.00 0.00 607.56 246.74 1217.10 00:18:34.177 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:34.177 Malloc0 : 2.00 421187.90 411.32 0.00 0.00 607.38 166.77 1201.49 00:18:34.177 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:34.177 Malloc0 : 2.00 421172.54 411.30 0.00 0.00 607.30 140.43 1209.30 00:18:34.177 =================================================================================================================== 00:18:34.177 Total : 1263531.11 1233.92 0.00 0.00 607.41 140.43 1217.10' 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:18:34.177 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:34.177 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:34.177 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:34.177 10:22:39 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:34.178 10:22:39 
bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:34.178 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:34.178 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:34.178 10:22:39 bdevperf_config -- bdevperf/test_config.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:37.468 10:22:42 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-06-10 10:22:39.525972] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:37.468 [2024-06-10 10:22:39.526226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:37.468 Using job config with 4 jobs 00:18:37.468 EAL: TSC is not safe to use in SMP mode 00:18:37.468 EAL: TSC is not invariant 00:18:37.468 [2024-06-10 10:22:40.011292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.468 [2024-06-10 10:22:40.123805] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:37.468 [2024-06-10 10:22:40.126874] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.468 cpumask for '\''job0'\'' is too big 00:18:37.468 cpumask for '\''job1'\'' is too big 00:18:37.468 cpumask for '\''job2'\'' is too big 00:18:37.468 cpumask for '\''job3'\'' is too big 00:18:37.468 Running I/O for 2 seconds... 
00:18:37.468 00:18:37.468 Latency(us) 00:18:37.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.468 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc0 : 2.00 151016.53 147.48 0.00 0.00 1694.72 916.72 6647.22 00:18:37.468 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc1 : 2.00 151030.19 147.49 0.00 0.00 1694.14 1076.66 6709.64 00:18:37.468 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc0 : 2.00 151020.92 147.48 0.00 0.00 1692.95 862.11 5835.82 00:18:37.468 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc1 : 2.00 151062.73 147.52 0.00 0.00 1692.12 877.71 5898.24 00:18:37.468 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc0 : 2.01 151091.70 147.55 0.00 0.00 1690.60 854.31 5024.43 00:18:37.468 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc1 : 2.01 151081.16 147.54 0.00 0.00 1690.31 725.58 4899.60 00:18:37.468 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc0 : 2.01 151164.15 147.62 0.00 0.00 1688.17 329.63 4462.69 00:18:37.468 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc1 : 2.01 151154.29 147.61 0.00 0.00 1687.95 255.51 4462.69 00:18:37.468 =================================================================================================================== 00:18:37.468 Total : 1208621.66 1180.29 0.00 0.00 1691.37 255.51 6709.64' 00:18:37.468 10:22:42 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-06-10 10:22:39.525972] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:37.468 [2024-06-10 10:22:39.526226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:37.468 Using job config with 4 jobs 00:18:37.468 EAL: TSC is not safe to use in SMP mode 00:18:37.468 EAL: TSC is not invariant 00:18:37.468 [2024-06-10 10:22:40.011292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.468 [2024-06-10 10:22:40.123805] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:37.468 [2024-06-10 10:22:40.126874] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.468 cpumask for '\''job0'\'' is too big 00:18:37.468 cpumask for '\''job1'\'' is too big 00:18:37.468 cpumask for '\''job2'\'' is too big 00:18:37.468 cpumask for '\''job3'\'' is too big 00:18:37.468 Running I/O for 2 seconds... 
00:18:37.468 00:18:37.468 Latency(us) 00:18:37.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.468 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc0 : 2.00 151016.53 147.48 0.00 0.00 1694.72 916.72 6647.22 00:18:37.468 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc1 : 2.00 151030.19 147.49 0.00 0.00 1694.14 1076.66 6709.64 00:18:37.468 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc0 : 2.00 151020.92 147.48 0.00 0.00 1692.95 862.11 5835.82 00:18:37.468 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc1 : 2.00 151062.73 147.52 0.00 0.00 1692.12 877.71 5898.24 00:18:37.468 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc0 : 2.01 151091.70 147.55 0.00 0.00 1690.60 854.31 5024.43 00:18:37.468 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc1 : 2.01 151081.16 147.54 0.00 0.00 1690.31 725.58 4899.60 00:18:37.468 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc0 : 2.01 151164.15 147.62 0.00 0.00 1688.17 329.63 4462.69 00:18:37.468 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc1 : 2.01 151154.29 147.61 0.00 0.00 1687.95 255.51 4462.69 00:18:37.468 =================================================================================================================== 00:18:37.468 Total : 1208621.66 1180.29 0.00 0.00 1691.37 255.51 6709.64' 00:18:37.468 10:22:42 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-06-10 10:22:39.525972] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:37.468 [2024-06-10 10:22:39.526226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:37.468 Using job config with 4 jobs 00:18:37.468 EAL: TSC is not safe to use in SMP mode 00:18:37.468 EAL: TSC is not invariant 00:18:37.468 [2024-06-10 10:22:40.011292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.468 [2024-06-10 10:22:40.123805] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:37.468 [2024-06-10 10:22:40.126874] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.468 cpumask for '\''job0'\'' is too big 00:18:37.468 cpumask for '\''job1'\'' is too big 00:18:37.468 cpumask for '\''job2'\'' is too big 00:18:37.468 cpumask for '\''job3'\'' is too big 00:18:37.468 Running I/O for 2 seconds... 
00:18:37.468 00:18:37.468 Latency(us) 00:18:37.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.468 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.468 Malloc0 : 2.00 151016.53 147.48 0.00 0.00 1694.72 916.72 6647.22 00:18:37.468 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.469 Malloc1 : 2.00 151030.19 147.49 0.00 0.00 1694.14 1076.66 6709.64 00:18:37.469 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.469 Malloc0 : 2.00 151020.92 147.48 0.00 0.00 1692.95 862.11 5835.82 00:18:37.469 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.469 Malloc1 : 2.00 151062.73 147.52 0.00 0.00 1692.12 877.71 5898.24 00:18:37.469 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.469 Malloc0 : 2.01 151091.70 147.55 0.00 0.00 1690.60 854.31 5024.43 00:18:37.469 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.469 Malloc1 : 2.01 151081.16 147.54 0.00 0.00 1690.31 725.58 4899.60 00:18:37.469 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.469 Malloc0 : 2.01 151164.15 147.62 0.00 0.00 1688.17 329.63 4462.69 00:18:37.469 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:37.469 Malloc1 : 2.01 151154.29 147.61 0.00 0.00 1687.95 255.51 4462.69 00:18:37.469 =================================================================================================================== 00:18:37.469 Total : 1208621.66 1180.29 0.00 0.00 1691.37 255.51 6709.64' 00:18:37.469 10:22:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:37.469 10:22:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:37.469 10:22:42 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:18:37.469 10:22:42 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:18:37.469 10:22:42 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:37.469 10:22:42 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:37.469 00:18:37.469 real 0m11.607s 00:18:37.469 user 0m9.250s 00:18:37.469 sys 0m2.453s 00:18:37.469 10:22:42 bdevperf_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:37.469 10:22:42 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:18:37.469 ************************************ 00:18:37.469 END TEST bdevperf_config 00:18:37.469 ************************************ 00:18:37.469 10:22:42 -- spdk/autotest.sh@192 -- # uname -s 00:18:37.469 10:22:42 -- spdk/autotest.sh@192 -- # [[ FreeBSD == Linux ]] 00:18:37.469 10:22:42 -- spdk/autotest.sh@198 -- # uname -s 00:18:37.469 10:22:42 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:18:37.469 10:22:42 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:18:37.469 10:22:42 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:37.469 10:22:42 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:37.469 10:22:42 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:37.469 10:22:42 -- common/autotest_common.sh@10 -- # set +x 00:18:37.469 ************************************ 00:18:37.469 START TEST blockdev_nvme 00:18:37.469 
************************************ 00:18:37.469 10:22:42 blockdev_nvme -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:37.469 * Looking for test storage... 00:18:37.469 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:37.469 10:22:42 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69063 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 69063 00:18:37.469 10:22:42 blockdev_nvme -- common/autotest_common.sh@830 -- # '[' -z 69063 ']' 00:18:37.469 10:22:42 blockdev_nvme -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.469 10:22:42 blockdev_nvme -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:37.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.469 10:22:42 blockdev_nvme -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:37.469 10:22:42 blockdev_nvme -- bdev/blockdev.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:37.469 10:22:42 blockdev_nvme -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:37.469 10:22:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:37.469 [2024-06-10 10:22:42.603152] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:37.469 [2024-06-10 10:22:42.603294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:37.469 EAL: TSC is not safe to use in SMP mode 00:18:37.469 EAL: TSC is not invariant 00:18:37.469 [2024-06-10 10:22:43.067329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.727 [2024-06-10 10:22:43.149007] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:37.727 [2024-06-10 10:22:43.151156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.985 10:22:43 blockdev_nvme -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:37.985 10:22:43 blockdev_nvme -- common/autotest_common.sh@863 -- # return 0 00:18:37.985 10:22:43 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:18:37.985 10:22:43 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:18:37.985 10:22:43 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:18:37.985 10:22:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:18:37.985 10:22:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.244 [2024-06-10 10:22:43.619706] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:38.244 10:22:43 
blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "5fca982c-2713-11ef-b084-113036b5c18d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5fca982c-2713-11ef-b084-113036b5c18d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:18:38.244 10:22:43 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 69063 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@949 -- # '[' -z 69063 ']' 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@953 -- # kill -0 69063 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@954 -- # uname 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@957 -- # ps -c -o command 69063 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@957 -- # tail -1 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:18:38.244 killing process with pid 69063 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@967 -- # echo 'killing process with pid 69063' 
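(A rough sketch of the bdev-name extraction traced at blockdev.sh@748-749 above; rpc_cmd and the jq filters come straight from the trace, but the mapfile/array handling of the real script is simplified here.)
  # keep only bdevs not claimed by another module, then pull their names
  bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
  bdevs_name=$(printf '%s\n' "$bdevs" | jq -r .name)   # a single entry in this run: Nvme0n1, which becomes hello_world_bdev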
00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@968 -- # kill 69063 00:18:38.244 10:22:43 blockdev_nvme -- common/autotest_common.sh@973 -- # wait 69063 00:18:38.503 10:22:44 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:38.503 10:22:44 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:38.503 10:22:44 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:18:38.503 10:22:44 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:38.503 10:22:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:38.503 ************************************ 00:18:38.503 START TEST bdev_hello_world 00:18:38.503 ************************************ 00:18:38.503 10:22:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:38.503 [2024-06-10 10:22:44.046589] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:38.503 [2024-06-10 10:22:44.046790] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:39.088 EAL: TSC is not safe to use in SMP mode 00:18:39.088 EAL: TSC is not invariant 00:18:39.088 [2024-06-10 10:22:44.505731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.088 [2024-06-10 10:22:44.603059] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:39.088 [2024-06-10 10:22:44.605741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.088 [2024-06-10 10:22:44.664897] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:39.348 [2024-06-10 10:22:44.734740] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:39.348 [2024-06-10 10:22:44.734803] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:18:39.348 [2024-06-10 10:22:44.734818] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:39.348 [2024-06-10 10:22:44.735597] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:39.348 [2024-06-10 10:22:44.735950] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:39.348 [2024-06-10 10:22:44.735990] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:39.348 [2024-06-10 10:22:44.736173] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:39.348 00:18:39.348 [2024-06-10 10:22:44.736204] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:39.348 00:18:39.348 real 0m0.879s 00:18:39.348 user 0m0.371s 00:18:39.348 sys 0m0.506s 00:18:39.348 10:22:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:39.348 10:22:44 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:39.348 ************************************ 00:18:39.348 END TEST bdev_hello_world 00:18:39.348 ************************************ 00:18:39.348 10:22:44 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:18:39.348 10:22:44 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:18:39.348 10:22:44 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:39.348 10:22:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:39.608 ************************************ 00:18:39.608 START TEST bdev_bounds 00:18:39.608 ************************************ 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # bdev_bounds '' 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=69134 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:39.608 Process bdevio pid: 69134 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 69134' 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 69134 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@830 -- # '[' -z 69134 ']' 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local max_retries=100 00:18:39.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # xtrace_disable 00:18:39.608 10:22:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:39.608 [2024-06-10 10:22:44.967223] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:39.608 [2024-06-10 10:22:44.967456] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:39.867 EAL: TSC is not safe to use in SMP mode 00:18:39.867 EAL: TSC is not invariant 00:18:39.867 [2024-06-10 10:22:45.448851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:40.125 [2024-06-10 10:22:45.538578] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:40.125 [2024-06-10 10:22:45.538646] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:40.125 [2024-06-10 10:22:45.538663] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 
00:18:40.125 [2024-06-10 10:22:45.542159] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.125 [2024-06-10 10:22:45.542267] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.125 [2024-06-10 10:22:45.542264] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.125 [2024-06-10 10:22:45.599442] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@863 -- # return 0 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:40.692 I/O targets: 00:18:40.692 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:40.692 00:18:40.692 00:18:40.692 CUnit - A unit testing framework for C - Version 2.1-3 00:18:40.692 http://cunit.sourceforge.net/ 00:18:40.692 00:18:40.692 00:18:40.692 Suite: bdevio tests on: Nvme0n1 00:18:40.692 Test: blockdev write read block ...passed 00:18:40.692 Test: blockdev write zeroes read block ...passed 00:18:40.692 Test: blockdev write zeroes read no split ...passed 00:18:40.692 Test: blockdev write zeroes read split ...passed 00:18:40.692 Test: blockdev write zeroes read split partial ...passed 00:18:40.692 Test: blockdev reset ...[2024-06-10 10:22:46.141150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:18:40.692 [2024-06-10 10:22:46.142461] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:40.692 passed 00:18:40.692 Test: blockdev write read 8 blocks ...passed 00:18:40.692 Test: blockdev write read size > 128k ...passed 00:18:40.692 Test: blockdev write read invalid size ...passed 00:18:40.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:40.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:40.692 Test: blockdev write read max offset ...passed 00:18:40.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:40.692 Test: blockdev writev readv 8 blocks ...passed 00:18:40.692 Test: blockdev writev readv 30 x 1block ...passed 00:18:40.692 Test: blockdev writev readv block ...passed 00:18:40.692 Test: blockdev writev readv size > 128k ...passed 00:18:40.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:40.692 Test: blockdev comparev and writev ...[2024-06-10 10:22:46.145924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1c7947000 len:0x1000 00:18:40.692 [2024-06-10 10:22:46.145967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:40.692 passed 00:18:40.692 Test: blockdev nvme passthru rw ...passed 00:18:40.692 Test: blockdev nvme passthru vendor specific ...passed 00:18:40.692 Test: blockdev nvme admin passthru ...[2024-06-10 10:22:46.146450] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:40.692 [2024-06-10 10:22:46.146469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:40.692 passed 00:18:40.692 Test: blockdev copy ...passed 00:18:40.692 00:18:40.692 Run Summary: Type Total Ran Passed Failed 
Inactive 00:18:40.692 suites 1 1 n/a 0 0 00:18:40.692 tests 23 23 23 0 0 00:18:40.692 asserts 152 152 152 0 n/a 00:18:40.692 00:18:40.692 Elapsed time = 0.031 seconds 00:18:40.692 0 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 69134 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@949 -- # '[' -z 69134 ']' 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # kill -0 69134 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # uname 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # ps -c -o command 69134 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # tail -1 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # process_name=bdevio 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' bdevio = sudo ']' 00:18:40.692 killing process with pid 69134 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # echo 'killing process with pid 69134' 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # kill 69134 00:18:40.692 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # wait 69134 00:18:40.950 10:22:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:18:40.950 00:18:40.950 real 0m1.410s 00:18:40.950 user 0m2.839s 00:18:40.950 sys 0m0.588s 00:18:40.950 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:40.950 10:22:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:40.950 ************************************ 00:18:40.950 END TEST bdev_bounds 00:18:40.950 ************************************ 00:18:40.950 10:22:46 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:18:40.950 10:22:46 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:18:40.950 10:22:46 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:40.950 10:22:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:40.950 ************************************ 00:18:40.950 START TEST bdev_nbd 00:18:40.951 ************************************ 00:18:40.951 10:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:18:40.951 10:22:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:18:40.951 10:22:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:18:40.951 10:22:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:18:40.951 00:18:40.951 real 0m0.004s 00:18:40.951 user 0m0.003s 00:18:40.951 sys 0m0.001s 00:18:40.951 10:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:40.951 ************************************ 00:18:40.951 END TEST bdev_nbd 00:18:40.951 10:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:40.951 ************************************ 00:18:40.951 10:22:46 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:18:40.951 10:22:46 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 
00:18:40.951 skipping fio tests on NVMe due to multi-ns failures. 00:18:40.951 10:22:46 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:18:40.951 10:22:46 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:40.951 10:22:46 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:40.951 10:22:46 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:18:40.951 10:22:46 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:40.951 10:22:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:40.951 ************************************ 00:18:40.951 START TEST bdev_verify 00:18:40.951 ************************************ 00:18:40.951 10:22:46 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:40.951 [2024-06-10 10:22:46.457702] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:40.951 [2024-06-10 10:22:46.457948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:41.517 EAL: TSC is not safe to use in SMP mode 00:18:41.517 EAL: TSC is not invariant 00:18:41.517 [2024-06-10 10:22:46.952152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:41.517 [2024-06-10 10:22:47.047496] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:41.517 [2024-06-10 10:22:47.047560] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:41.518 [2024-06-10 10:22:47.050981] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.518 [2024-06-10 10:22:47.050970] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.518 [2024-06-10 10:22:47.109922] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:41.777 Running I/O for 5 seconds... 
00:18:47.043 00:18:47.043 Latency(us) 00:18:47.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.043 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:47.043 Verification LBA range: start 0x0 length 0xa0000 00:18:47.043 Nvme0n1 : 5.00 19402.69 75.79 0.00 0.00 6587.43 647.56 14667.58 00:18:47.043 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:47.043 Verification LBA range: start 0xa0000 length 0xa0000 00:18:47.043 Nvme0n1 : 5.00 19156.95 74.83 0.00 0.00 6672.59 651.46 12483.05 00:18:47.043 =================================================================================================================== 00:18:47.044 Total : 38559.64 150.62 0.00 0.00 6629.74 647.56 14667.58 00:18:47.301 00:18:47.301 real 0m6.431s 00:18:47.301 user 0m11.511s 00:18:47.301 sys 0m0.571s 00:18:47.301 10:22:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:47.301 10:22:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:47.301 ************************************ 00:18:47.301 END TEST bdev_verify 00:18:47.301 ************************************ 00:18:47.559 10:22:52 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:47.559 10:22:52 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 16 -le 1 ']' 00:18:47.559 10:22:52 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:47.559 10:22:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:47.559 ************************************ 00:18:47.559 START TEST bdev_verify_big_io 00:18:47.559 ************************************ 00:18:47.559 10:22:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:47.559 [2024-06-10 10:22:52.938755] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:47.559 [2024-06-10 10:22:52.939008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:48.124 EAL: TSC is not safe to use in SMP mode 00:18:48.124 EAL: TSC is not invariant 00:18:48.124 [2024-06-10 10:22:53.434197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:48.124 [2024-06-10 10:22:53.536123] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:48.124 [2024-06-10 10:22:53.536199] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:48.124 [2024-06-10 10:22:53.539682] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.124 [2024-06-10 10:22:53.539668] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.124 [2024-06-10 10:22:53.598761] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:48.124 Running I/O for 5 seconds... 
00:18:53.404 00:18:53.404 Latency(us) 00:18:53.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.404 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:53.404 Verification LBA range: start 0x0 length 0xa000 00:18:53.404 Nvme0n1 : 5.01 8029.18 501.82 0.00 0.00 15844.09 198.95 26339.23 00:18:53.404 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:53.404 Verification LBA range: start 0xa000 length 0xa000 00:18:53.404 Nvme0n1 : 5.01 7933.30 495.83 0.00 0.00 16044.97 327.68 24217.11 00:18:53.404 =================================================================================================================== 00:18:53.404 Total : 15962.48 997.65 0.00 0.00 15943.94 198.95 26339.23 00:18:56.692 00:18:56.692 real 0m8.801s 00:18:56.692 user 0m16.270s 00:18:56.692 sys 0m0.565s 00:18:56.692 10:23:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:56.692 10:23:01 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:56.692 ************************************ 00:18:56.692 END TEST bdev_verify_big_io 00:18:56.692 ************************************ 00:18:56.692 10:23:01 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:56.692 10:23:01 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:18:56.692 10:23:01 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:56.692 10:23:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:56.692 ************************************ 00:18:56.692 START TEST bdev_write_zeroes 00:18:56.692 ************************************ 00:18:56.692 10:23:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:56.692 [2024-06-10 10:23:01.781062] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:56.692 [2024-06-10 10:23:01.781240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:56.692 EAL: TSC is not safe to use in SMP mode 00:18:56.692 EAL: TSC is not invariant 00:18:56.692 [2024-06-10 10:23:02.268167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.949 [2024-06-10 10:23:02.362278] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:56.949 [2024-06-10 10:23:02.365116] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.949 [2024-06-10 10:23:02.424535] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:56.949 Running I/O for 1 seconds... 
00:18:58.322 00:18:58.322 Latency(us) 00:18:58.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.322 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:58.322 Nvme0n1 : 1.00 58202.75 227.35 0.00 0.00 2197.35 670.96 16352.79 00:18:58.322 =================================================================================================================== 00:18:58.322 Total : 58202.75 227.35 0.00 0.00 2197.35 670.96 16352.79 00:18:58.322 00:18:58.322 real 0m2.019s 00:18:58.322 user 0m1.474s 00:18:58.322 sys 0m0.542s 00:18:58.322 10:23:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:58.322 10:23:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:58.322 ************************************ 00:18:58.322 END TEST bdev_write_zeroes 00:18:58.322 ************************************ 00:18:58.322 10:23:03 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:58.322 10:23:03 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:18:58.322 10:23:03 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:58.322 10:23:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:58.322 ************************************ 00:18:58.322 START TEST bdev_json_nonenclosed 00:18:58.322 ************************************ 00:18:58.322 10:23:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:58.322 [2024-06-10 10:23:03.846440] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:58.322 [2024-06-10 10:23:03.846799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:18:58.889 EAL: TSC is not safe to use in SMP mode 00:18:58.889 EAL: TSC is not invariant 00:18:58.889 [2024-06-10 10:23:04.302730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.889 [2024-06-10 10:23:04.427051] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:58.889 [2024-06-10 10:23:04.429850] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.889 [2024-06-10 10:23:04.429892] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:18:58.889 [2024-06-10 10:23:04.429905] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:58.889 [2024-06-10 10:23:04.429913] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:59.148 00:18:59.148 real 0m0.763s 00:18:59.148 user 0m0.250s 00:18:59.148 sys 0m0.511s 00:18:59.148 10:23:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:18:59.148 10:23:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:59.148 ************************************ 00:18:59.148 END TEST bdev_json_nonenclosed 00:18:59.148 ************************************ 00:18:59.148 10:23:04 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:59.148 10:23:04 blockdev_nvme -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:18:59.148 10:23:04 blockdev_nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:18:59.148 10:23:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:59.148 ************************************ 00:18:59.148 START TEST bdev_json_nonarray 00:18:59.148 ************************************ 00:18:59.148 10:23:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:59.148 [2024-06-10 10:23:04.655258] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:18:59.148 [2024-06-10 10:23:04.655520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:00.085 EAL: TSC is not safe to use in SMP mode 00:19:00.085 EAL: TSC is not invariant 00:19:00.085 [2024-06-10 10:23:05.439120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.085 [2024-06-10 10:23:05.545403] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:00.085 [2024-06-10 10:23:05.548254] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.085 [2024-06-10 10:23:05.548299] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:00.085 [2024-06-10 10:23:05.548309] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:00.085 [2024-06-10 10:23:05.548316] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:00.345 00:19:00.345 real 0m1.081s 00:19:00.345 user 0m0.236s 00:19:00.345 sys 0m0.842s 00:19:00.345 10:23:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:00.345 ************************************ 00:19:00.345 END TEST bdev_json_nonarray 00:19:00.345 ************************************ 00:19:00.345 10:23:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:19:00.345 10:23:05 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:19:00.345 00:19:00.345 real 0m23.343s 00:19:00.345 user 0m34.527s 00:19:00.345 sys 0m5.065s 00:19:00.345 10:23:05 blockdev_nvme -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:00.345 10:23:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.345 ************************************ 00:19:00.345 END TEST blockdev_nvme 00:19:00.345 ************************************ 00:19:00.345 10:23:05 -- spdk/autotest.sh@213 -- # uname -s 00:19:00.345 10:23:05 -- spdk/autotest.sh@213 -- # [[ FreeBSD == Linux ]] 00:19:00.345 10:23:05 -- spdk/autotest.sh@216 -- # run_test nvme /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:00.345 10:23:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:00.345 10:23:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:00.345 10:23:05 -- common/autotest_common.sh@10 -- # set +x 00:19:00.345 ************************************ 00:19:00.345 START TEST nvme 00:19:00.345 ************************************ 00:19:00.345 10:23:05 nvme -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:00.603 * Looking for test storage... 
00:19:00.603 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:19:00.603 10:23:05 nvme -- nvme/nvme.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:00.603 hw.nic_uio.bdfs="0:16:0" 00:19:00.603 10:23:06 nvme -- nvme/nvme.sh@79 -- # uname 00:19:00.603 10:23:06 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:19:00.603 10:23:06 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:00.603 10:23:06 nvme -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:19:00.603 10:23:06 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:00.603 10:23:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.603 ************************************ 00:19:00.603 START TEST nvme_reset 00:19:00.603 ************************************ 00:19:00.603 10:23:06 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:01.171 EAL: TSC is not safe to use in SMP mode 00:19:01.171 EAL: TSC is not invariant 00:19:01.171 [2024-06-10 10:23:06.704552] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:01.171 Initializing NVMe Controllers 00:19:01.171 Skipping QEMU NVMe SSD at 0000:00:10.0 00:19:01.171 No NVMe controller found, /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:19:01.171 00:19:01.171 real 0m0.586s 00:19:01.171 user 0m0.001s 00:19:01.171 sys 0m0.584s 00:19:01.171 10:23:06 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:01.171 ************************************ 00:19:01.171 END TEST nvme_reset 00:19:01.171 ************************************ 00:19:01.171 10:23:06 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:19:01.430 10:23:06 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:19:01.430 10:23:06 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:01.430 10:23:06 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:01.430 10:23:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:01.430 ************************************ 00:19:01.430 START TEST nvme_identify 00:19:01.430 ************************************ 00:19:01.430 10:23:06 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # nvme_identify 00:19:01.430 10:23:06 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:19:01.430 10:23:06 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:19:01.430 10:23:06 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:19:01.430 10:23:06 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:19:01.430 10:23:06 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # bdfs=() 00:19:01.430 10:23:06 nvme.nvme_identify -- common/autotest_common.sh@1512 -- # local bdfs 00:19:01.430 10:23:06 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:01.430 10:23:06 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:01.430 10:23:06 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:19:01.430 10:23:06 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:19:01.430 10:23:06 nvme.nvme_identify -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:19:01.430 10:23:06 nvme.nvme_identify -- 
nvme/nvme.sh@14 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:19:02.044 EAL: TSC is not safe to use in SMP mode 00:19:02.044 EAL: TSC is not invariant 00:19:02.044 [2024-06-10 10:23:07.332103] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:02.044 ===================================================== 00:19:02.044 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:02.044 ===================================================== 00:19:02.044 Controller Capabilities/Features 00:19:02.044 ================================ 00:19:02.044 Vendor ID: 1b36 00:19:02.044 Subsystem Vendor ID: 1af4 00:19:02.044 Serial Number: 12340 00:19:02.044 Model Number: QEMU NVMe Ctrl 00:19:02.044 Firmware Version: 8.0.0 00:19:02.044 Recommended Arb Burst: 6 00:19:02.044 IEEE OUI Identifier: 00 54 52 00:19:02.044 Multi-path I/O 00:19:02.044 May have multiple subsystem ports: No 00:19:02.044 May have multiple controllers: No 00:19:02.044 Associated with SR-IOV VF: No 00:19:02.044 Max Data Transfer Size: 524288 00:19:02.044 Max Number of Namespaces: 256 00:19:02.044 Max Number of I/O Queues: 64 00:19:02.044 NVMe Specification Version (VS): 1.4 00:19:02.044 NVMe Specification Version (Identify): 1.4 00:19:02.044 Maximum Queue Entries: 2048 00:19:02.044 Contiguous Queues Required: Yes 00:19:02.044 Arbitration Mechanisms Supported 00:19:02.044 Weighted Round Robin: Not Supported 00:19:02.044 Vendor Specific: Not Supported 00:19:02.044 Reset Timeout: 7500 ms 00:19:02.044 Doorbell Stride: 4 bytes 00:19:02.044 NVM Subsystem Reset: Not Supported 00:19:02.044 Command Sets Supported 00:19:02.044 NVM Command Set: Supported 00:19:02.044 Boot Partition: Not Supported 00:19:02.044 Memory Page Size Minimum: 4096 bytes 00:19:02.044 Memory Page Size Maximum: 65536 bytes 00:19:02.044 Persistent Memory Region: Not Supported 00:19:02.044 Optional Asynchronous Events Supported 00:19:02.044 Namespace Attribute Notices: Supported 00:19:02.044 Firmware Activation Notices: Not Supported 00:19:02.044 ANA Change Notices: Not Supported 00:19:02.044 PLE Aggregate Log Change Notices: Not Supported 00:19:02.044 LBA Status Info Alert Notices: Not Supported 00:19:02.044 EGE Aggregate Log Change Notices: Not Supported 00:19:02.044 Normal NVM Subsystem Shutdown event: Not Supported 00:19:02.044 Zone Descriptor Change Notices: Not Supported 00:19:02.044 Discovery Log Change Notices: Not Supported 00:19:02.044 Controller Attributes 00:19:02.044 128-bit Host Identifier: Not Supported 00:19:02.044 Non-Operational Permissive Mode: Not Supported 00:19:02.044 NVM Sets: Not Supported 00:19:02.044 Read Recovery Levels: Not Supported 00:19:02.044 Endurance Groups: Not Supported 00:19:02.044 Predictable Latency Mode: Not Supported 00:19:02.044 Traffic Based Keep ALive: Not Supported 00:19:02.044 Namespace Granularity: Not Supported 00:19:02.044 SQ Associations: Not Supported 00:19:02.044 UUID List: Not Supported 00:19:02.044 Multi-Domain Subsystem: Not Supported 00:19:02.044 Fixed Capacity Management: Not Supported 00:19:02.044 Variable Capacity Management: Not Supported 00:19:02.044 Delete Endurance Group: Not Supported 00:19:02.044 Delete NVM Set: Not Supported 00:19:02.044 Extended LBA Formats Supported: Supported 00:19:02.044 Flexible Data Placement Supported: Not Supported 00:19:02.044 00:19:02.044 Controller Memory Buffer Support 00:19:02.044 ================================ 00:19:02.044 Supported: No 00:19:02.044 00:19:02.044 Persistent Memory Region Support 00:19:02.044 
================================ 00:19:02.044 Supported: No 00:19:02.044 00:19:02.044 Admin Command Set Attributes 00:19:02.044 ============================ 00:19:02.044 Security Send/Receive: Not Supported 00:19:02.044 Format NVM: Supported 00:19:02.044 Firmware Activate/Download: Not Supported 00:19:02.044 Namespace Management: Supported 00:19:02.044 Device Self-Test: Not Supported 00:19:02.044 Directives: Supported 00:19:02.044 NVMe-MI: Not Supported 00:19:02.044 Virtualization Management: Not Supported 00:19:02.044 Doorbell Buffer Config: Supported 00:19:02.044 Get LBA Status Capability: Not Supported 00:19:02.044 Command & Feature Lockdown Capability: Not Supported 00:19:02.044 Abort Command Limit: 4 00:19:02.044 Async Event Request Limit: 4 00:19:02.044 Number of Firmware Slots: N/A 00:19:02.044 Firmware Slot 1 Read-Only: N/A 00:19:02.044 Firmware Activation Without Reset: N/A 00:19:02.044 Multiple Update Detection Support: N/A 00:19:02.044 Firmware Update Granularity: No Information Provided 00:19:02.044 Per-Namespace SMART Log: Yes 00:19:02.044 Asymmetric Namespace Access Log Page: Not Supported 00:19:02.044 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:02.044 Command Effects Log Page: Supported 00:19:02.044 Get Log Page Extended Data: Supported 00:19:02.044 Telemetry Log Pages: Not Supported 00:19:02.044 Persistent Event Log Pages: Not Supported 00:19:02.044 Supported Log Pages Log Page: May Support 00:19:02.044 Commands Supported & Effects Log Page: Not Supported 00:19:02.044 Feature Identifiers & Effects Log Page:May Support 00:19:02.044 NVMe-MI Commands & Effects Log Page: May Support 00:19:02.044 Data Area 4 for Telemetry Log: Not Supported 00:19:02.044 Error Log Page Entries Supported: 1 00:19:02.044 Keep Alive: Not Supported 00:19:02.044 00:19:02.044 NVM Command Set Attributes 00:19:02.044 ========================== 00:19:02.044 Submission Queue Entry Size 00:19:02.044 Max: 64 00:19:02.044 Min: 64 00:19:02.044 Completion Queue Entry Size 00:19:02.044 Max: 16 00:19:02.044 Min: 16 00:19:02.044 Number of Namespaces: 256 00:19:02.044 Compare Command: Supported 00:19:02.044 Write Uncorrectable Command: Not Supported 00:19:02.044 Dataset Management Command: Supported 00:19:02.044 Write Zeroes Command: Supported 00:19:02.044 Set Features Save Field: Supported 00:19:02.044 Reservations: Not Supported 00:19:02.044 Timestamp: Supported 00:19:02.044 Copy: Supported 00:19:02.044 Volatile Write Cache: Present 00:19:02.044 Atomic Write Unit (Normal): 1 00:19:02.044 Atomic Write Unit (PFail): 1 00:19:02.044 Atomic Compare & Write Unit: 1 00:19:02.044 Fused Compare & Write: Not Supported 00:19:02.044 Scatter-Gather List 00:19:02.044 SGL Command Set: Supported 00:19:02.044 SGL Keyed: Not Supported 00:19:02.044 SGL Bit Bucket Descriptor: Not Supported 00:19:02.044 SGL Metadata Pointer: Not Supported 00:19:02.044 Oversized SGL: Not Supported 00:19:02.044 SGL Metadata Address: Not Supported 00:19:02.044 SGL Offset: Not Supported 00:19:02.044 Transport SGL Data Block: Not Supported 00:19:02.044 Replay Protected Memory Block: Not Supported 00:19:02.044 00:19:02.044 Firmware Slot Information 00:19:02.045 ========================= 00:19:02.045 Active slot: 1 00:19:02.045 Slot 1 Firmware Revision: 1.0 00:19:02.045 00:19:02.045 00:19:02.045 Commands Supported and Effects 00:19:02.045 ============================== 00:19:02.045 Admin Commands 00:19:02.045 -------------- 00:19:02.045 Delete I/O Submission Queue (00h): Supported 00:19:02.045 Create I/O Submission Queue (01h): Supported 00:19:02.045 
Get Log Page (02h): Supported 00:19:02.045 Delete I/O Completion Queue (04h): Supported 00:19:02.045 Create I/O Completion Queue (05h): Supported 00:19:02.045 Identify (06h): Supported 00:19:02.045 Abort (08h): Supported 00:19:02.045 Set Features (09h): Supported 00:19:02.045 Get Features (0Ah): Supported 00:19:02.045 Asynchronous Event Request (0Ch): Supported 00:19:02.045 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:02.045 Directive Send (19h): Supported 00:19:02.045 Directive Receive (1Ah): Supported 00:19:02.045 Virtualization Management (1Ch): Supported 00:19:02.045 Doorbell Buffer Config (7Ch): Supported 00:19:02.045 Format NVM (80h): Supported LBA-Change 00:19:02.045 I/O Commands 00:19:02.045 ------------ 00:19:02.045 Flush (00h): Supported LBA-Change 00:19:02.045 Write (01h): Supported LBA-Change 00:19:02.045 Read (02h): Supported 00:19:02.045 Compare (05h): Supported 00:19:02.045 Write Zeroes (08h): Supported LBA-Change 00:19:02.045 Dataset Management (09h): Supported LBA-Change 00:19:02.045 Unknown (0Ch): Supported 00:19:02.045 Unknown (12h): Supported 00:19:02.045 Copy (19h): Supported LBA-Change 00:19:02.045 Unknown (1Dh): Supported LBA-Change 00:19:02.045 00:19:02.045 Error Log 00:19:02.045 ========= 00:19:02.045 00:19:02.045 Arbitration 00:19:02.045 =========== 00:19:02.045 Arbitration Burst: no limit 00:19:02.045 00:19:02.045 Power Management 00:19:02.045 ================ 00:19:02.045 Number of Power States: 1 00:19:02.045 Current Power State: Power State #0 00:19:02.045 Power State #0: 00:19:02.045 Max Power: 25.00 W 00:19:02.045 Non-Operational State: Operational 00:19:02.045 Entry Latency: 16 microseconds 00:19:02.045 Exit Latency: 4 microseconds 00:19:02.045 Relative Read Throughput: 0 00:19:02.045 Relative Read Latency: 0 00:19:02.045 Relative Write Throughput: 0 00:19:02.045 Relative Write Latency: 0 00:19:02.045 Idle Power: Not Reported 00:19:02.045 Active Power: Not Reported 00:19:02.045 Non-Operational Permissive Mode: Not Supported 00:19:02.045 00:19:02.045 Health Information 00:19:02.045 ================== 00:19:02.045 Critical Warnings: 00:19:02.045 Available Spare Space: OK 00:19:02.045 Temperature: OK 00:19:02.045 Device Reliability: OK 00:19:02.045 Read Only: No 00:19:02.045 Volatile Memory Backup: OK 00:19:02.045 Current Temperature: 323 Kelvin (50 Celsius) 00:19:02.045 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:02.045 Available Spare: 0% 00:19:02.045 Available Spare Threshold: 0% 00:19:02.045 Life Percentage Used: 0% 00:19:02.045 Data Units Read: 11805 00:19:02.045 Data Units Written: 11790 00:19:02.045 Host Read Commands: 273163 00:19:02.045 Host Write Commands: 273012 00:19:02.045 Controller Busy Time: 0 minutes 00:19:02.045 Power Cycles: 0 00:19:02.045 Power On Hours: 0 hours 00:19:02.045 Unsafe Shutdowns: 0 00:19:02.045 Unrecoverable Media Errors: 0 00:19:02.045 Lifetime Error Log Entries: 0 00:19:02.045 Warning Temperature Time: 0 minutes 00:19:02.045 Critical Temperature Time: 0 minutes 00:19:02.045 00:19:02.045 Number of Queues 00:19:02.045 ================ 00:19:02.045 Number of I/O Submission Queues: 64 00:19:02.045 Number of I/O Completion Queues: 64 00:19:02.045 00:19:02.045 ZNS Specific Controller Data 00:19:02.045 ============================ 00:19:02.045 Zone Append Size Limit: 0 00:19:02.045 00:19:02.045 00:19:02.045 Active Namespaces 00:19:02.045 ================= 00:19:02.045 Namespace ID:1 00:19:02.045 Error Recovery Timeout: Unlimited 00:19:02.045 Command Set Identifier: NVM (00h) 00:19:02.045 Deallocate: 
Supported 00:19:02.045 Deallocated/Unwritten Error: Supported 00:19:02.045 Deallocated Read Value: All 0x00 00:19:02.045 Deallocate in Write Zeroes: Not Supported 00:19:02.045 Deallocated Guard Field: 0xFFFF 00:19:02.045 Flush: Supported 00:19:02.045 Reservation: Not Supported 00:19:02.045 Namespace Sharing Capabilities: Private 00:19:02.045 Size (in LBAs): 1310720 (5GiB) 00:19:02.045 Capacity (in LBAs): 1310720 (5GiB) 00:19:02.045 Utilization (in LBAs): 1310720 (5GiB) 00:19:02.045 Thin Provisioning: Not Supported 00:19:02.045 Per-NS Atomic Units: No 00:19:02.045 Maximum Single Source Range Length: 128 00:19:02.045 Maximum Copy Length: 128 00:19:02.045 Maximum Source Range Count: 128 00:19:02.045 NGUID/EUI64 Never Reused: No 00:19:02.045 Namespace Write Protected: No 00:19:02.045 Number of LBA Formats: 8 00:19:02.045 Current LBA Format: LBA Format #04 00:19:02.045 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:02.045 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:02.045 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:02.045 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:02.045 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:02.045 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:02.045 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:02.045 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:02.045 00:19:02.045 10:23:07 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:02.045 10:23:07 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:19:02.305 EAL: TSC is not safe to use in SMP mode 00:19:02.305 EAL: TSC is not invariant 00:19:02.305 [2024-06-10 10:23:07.875034] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:02.305 ===================================================== 00:19:02.305 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:02.305 ===================================================== 00:19:02.305 Controller Capabilities/Features 00:19:02.305 ================================ 00:19:02.305 Vendor ID: 1b36 00:19:02.305 Subsystem Vendor ID: 1af4 00:19:02.305 Serial Number: 12340 00:19:02.305 Model Number: QEMU NVMe Ctrl 00:19:02.305 Firmware Version: 8.0.0 00:19:02.305 Recommended Arb Burst: 6 00:19:02.305 IEEE OUI Identifier: 00 54 52 00:19:02.305 Multi-path I/O 00:19:02.305 May have multiple subsystem ports: No 00:19:02.305 May have multiple controllers: No 00:19:02.305 Associated with SR-IOV VF: No 00:19:02.305 Max Data Transfer Size: 524288 00:19:02.305 Max Number of Namespaces: 256 00:19:02.305 Max Number of I/O Queues: 64 00:19:02.305 NVMe Specification Version (VS): 1.4 00:19:02.305 NVMe Specification Version (Identify): 1.4 00:19:02.305 Maximum Queue Entries: 2048 00:19:02.305 Contiguous Queues Required: Yes 00:19:02.305 Arbitration Mechanisms Supported 00:19:02.305 Weighted Round Robin: Not Supported 00:19:02.305 Vendor Specific: Not Supported 00:19:02.305 Reset Timeout: 7500 ms 00:19:02.305 Doorbell Stride: 4 bytes 00:19:02.305 NVM Subsystem Reset: Not Supported 00:19:02.305 Command Sets Supported 00:19:02.305 NVM Command Set: Supported 00:19:02.305 Boot Partition: Not Supported 00:19:02.305 Memory Page Size Minimum: 4096 bytes 00:19:02.305 Memory Page Size Maximum: 65536 bytes 00:19:02.305 Persistent Memory Region: Not Supported 00:19:02.305 Optional Asynchronous Events Supported 00:19:02.305 Namespace Attribute Notices: Supported 00:19:02.305 Firmware 
Activation Notices: Not Supported 00:19:02.305 ANA Change Notices: Not Supported 00:19:02.305 PLE Aggregate Log Change Notices: Not Supported 00:19:02.305 LBA Status Info Alert Notices: Not Supported 00:19:02.305 EGE Aggregate Log Change Notices: Not Supported 00:19:02.305 Normal NVM Subsystem Shutdown event: Not Supported 00:19:02.305 Zone Descriptor Change Notices: Not Supported 00:19:02.305 Discovery Log Change Notices: Not Supported 00:19:02.305 Controller Attributes 00:19:02.305 128-bit Host Identifier: Not Supported 00:19:02.305 Non-Operational Permissive Mode: Not Supported 00:19:02.305 NVM Sets: Not Supported 00:19:02.305 Read Recovery Levels: Not Supported 00:19:02.305 Endurance Groups: Not Supported 00:19:02.305 Predictable Latency Mode: Not Supported 00:19:02.305 Traffic Based Keep ALive: Not Supported 00:19:02.305 Namespace Granularity: Not Supported 00:19:02.305 SQ Associations: Not Supported 00:19:02.305 UUID List: Not Supported 00:19:02.305 Multi-Domain Subsystem: Not Supported 00:19:02.305 Fixed Capacity Management: Not Supported 00:19:02.305 Variable Capacity Management: Not Supported 00:19:02.305 Delete Endurance Group: Not Supported 00:19:02.305 Delete NVM Set: Not Supported 00:19:02.305 Extended LBA Formats Supported: Supported 00:19:02.305 Flexible Data Placement Supported: Not Supported 00:19:02.305 00:19:02.305 Controller Memory Buffer Support 00:19:02.305 ================================ 00:19:02.305 Supported: No 00:19:02.305 00:19:02.305 Persistent Memory Region Support 00:19:02.305 ================================ 00:19:02.305 Supported: No 00:19:02.305 00:19:02.305 Admin Command Set Attributes 00:19:02.305 ============================ 00:19:02.305 Security Send/Receive: Not Supported 00:19:02.305 Format NVM: Supported 00:19:02.305 Firmware Activate/Download: Not Supported 00:19:02.305 Namespace Management: Supported 00:19:02.305 Device Self-Test: Not Supported 00:19:02.305 Directives: Supported 00:19:02.305 NVMe-MI: Not Supported 00:19:02.306 Virtualization Management: Not Supported 00:19:02.306 Doorbell Buffer Config: Supported 00:19:02.306 Get LBA Status Capability: Not Supported 00:19:02.306 Command & Feature Lockdown Capability: Not Supported 00:19:02.306 Abort Command Limit: 4 00:19:02.306 Async Event Request Limit: 4 00:19:02.306 Number of Firmware Slots: N/A 00:19:02.306 Firmware Slot 1 Read-Only: N/A 00:19:02.306 Firmware Activation Without Reset: N/A 00:19:02.306 Multiple Update Detection Support: N/A 00:19:02.306 Firmware Update Granularity: No Information Provided 00:19:02.306 Per-Namespace SMART Log: Yes 00:19:02.306 Asymmetric Namespace Access Log Page: Not Supported 00:19:02.306 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:02.306 Command Effects Log Page: Supported 00:19:02.306 Get Log Page Extended Data: Supported 00:19:02.306 Telemetry Log Pages: Not Supported 00:19:02.306 Persistent Event Log Pages: Not Supported 00:19:02.306 Supported Log Pages Log Page: May Support 00:19:02.306 Commands Supported & Effects Log Page: Not Supported 00:19:02.306 Feature Identifiers & Effects Log Page:May Support 00:19:02.306 NVMe-MI Commands & Effects Log Page: May Support 00:19:02.306 Data Area 4 for Telemetry Log: Not Supported 00:19:02.306 Error Log Page Entries Supported: 1 00:19:02.306 Keep Alive: Not Supported 00:19:02.306 00:19:02.306 NVM Command Set Attributes 00:19:02.306 ========================== 00:19:02.306 Submission Queue Entry Size 00:19:02.306 Max: 64 00:19:02.306 Min: 64 00:19:02.306 Completion Queue Entry Size 00:19:02.306 Max: 16 
00:19:02.306 Min: 16 00:19:02.306 Number of Namespaces: 256 00:19:02.306 Compare Command: Supported 00:19:02.306 Write Uncorrectable Command: Not Supported 00:19:02.306 Dataset Management Command: Supported 00:19:02.306 Write Zeroes Command: Supported 00:19:02.306 Set Features Save Field: Supported 00:19:02.306 Reservations: Not Supported 00:19:02.306 Timestamp: Supported 00:19:02.306 Copy: Supported 00:19:02.306 Volatile Write Cache: Present 00:19:02.306 Atomic Write Unit (Normal): 1 00:19:02.306 Atomic Write Unit (PFail): 1 00:19:02.306 Atomic Compare & Write Unit: 1 00:19:02.306 Fused Compare & Write: Not Supported 00:19:02.306 Scatter-Gather List 00:19:02.306 SGL Command Set: Supported 00:19:02.306 SGL Keyed: Not Supported 00:19:02.306 SGL Bit Bucket Descriptor: Not Supported 00:19:02.306 SGL Metadata Pointer: Not Supported 00:19:02.306 Oversized SGL: Not Supported 00:19:02.306 SGL Metadata Address: Not Supported 00:19:02.306 SGL Offset: Not Supported 00:19:02.306 Transport SGL Data Block: Not Supported 00:19:02.306 Replay Protected Memory Block: Not Supported 00:19:02.306 00:19:02.306 Firmware Slot Information 00:19:02.306 ========================= 00:19:02.306 Active slot: 1 00:19:02.306 Slot 1 Firmware Revision: 1.0 00:19:02.306 00:19:02.306 00:19:02.306 Commands Supported and Effects 00:19:02.306 ============================== 00:19:02.306 Admin Commands 00:19:02.306 -------------- 00:19:02.306 Delete I/O Submission Queue (00h): Supported 00:19:02.306 Create I/O Submission Queue (01h): Supported 00:19:02.306 Get Log Page (02h): Supported 00:19:02.306 Delete I/O Completion Queue (04h): Supported 00:19:02.306 Create I/O Completion Queue (05h): Supported 00:19:02.306 Identify (06h): Supported 00:19:02.306 Abort (08h): Supported 00:19:02.306 Set Features (09h): Supported 00:19:02.306 Get Features (0Ah): Supported 00:19:02.306 Asynchronous Event Request (0Ch): Supported 00:19:02.306 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:02.306 Directive Send (19h): Supported 00:19:02.306 Directive Receive (1Ah): Supported 00:19:02.306 Virtualization Management (1Ch): Supported 00:19:02.306 Doorbell Buffer Config (7Ch): Supported 00:19:02.306 Format NVM (80h): Supported LBA-Change 00:19:02.306 I/O Commands 00:19:02.306 ------------ 00:19:02.306 Flush (00h): Supported LBA-Change 00:19:02.306 Write (01h): Supported LBA-Change 00:19:02.306 Read (02h): Supported 00:19:02.306 Compare (05h): Supported 00:19:02.306 Write Zeroes (08h): Supported LBA-Change 00:19:02.306 Dataset Management (09h): Supported LBA-Change 00:19:02.306 Unknown (0Ch): Supported 00:19:02.306 Unknown (12h): Supported 00:19:02.306 Copy (19h): Supported LBA-Change 00:19:02.306 Unknown (1Dh): Supported LBA-Change 00:19:02.306 00:19:02.306 Error Log 00:19:02.306 ========= 00:19:02.306 00:19:02.306 Arbitration 00:19:02.306 =========== 00:19:02.306 Arbitration Burst: no limit 00:19:02.306 00:19:02.306 Power Management 00:19:02.306 ================ 00:19:02.306 Number of Power States: 1 00:19:02.306 Current Power State: Power State #0 00:19:02.306 Power State #0: 00:19:02.306 Max Power: 25.00 W 00:19:02.306 Non-Operational State: Operational 00:19:02.306 Entry Latency: 16 microseconds 00:19:02.306 Exit Latency: 4 microseconds 00:19:02.306 Relative Read Throughput: 0 00:19:02.306 Relative Read Latency: 0 00:19:02.306 Relative Write Throughput: 0 00:19:02.306 Relative Write Latency: 0 00:19:02.565 Idle Power: Not Reported 00:19:02.565 Active Power: Not Reported 00:19:02.565 Non-Operational Permissive Mode: Not Supported 
00:19:02.565 00:19:02.565 Health Information 00:19:02.565 ================== 00:19:02.565 Critical Warnings: 00:19:02.565 Available Spare Space: OK 00:19:02.565 Temperature: OK 00:19:02.565 Device Reliability: OK 00:19:02.565 Read Only: No 00:19:02.565 Volatile Memory Backup: OK 00:19:02.565 Current Temperature: 323 Kelvin (50 Celsius) 00:19:02.565 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:02.565 Available Spare: 0% 00:19:02.565 Available Spare Threshold: 0% 00:19:02.565 Life Percentage Used: 0% 00:19:02.565 Data Units Read: 11805 00:19:02.565 Data Units Written: 11790 00:19:02.565 Host Read Commands: 273163 00:19:02.565 Host Write Commands: 273012 00:19:02.565 Controller Busy Time: 0 minutes 00:19:02.565 Power Cycles: 0 00:19:02.565 Power On Hours: 0 hours 00:19:02.566 Unsafe Shutdowns: 0 00:19:02.566 Unrecoverable Media Errors: 0 00:19:02.566 Lifetime Error Log Entries: 0 00:19:02.566 Warning Temperature Time: 0 minutes 00:19:02.566 Critical Temperature Time: 0 minutes 00:19:02.566 00:19:02.566 Number of Queues 00:19:02.566 ================ 00:19:02.566 Number of I/O Submission Queues: 64 00:19:02.566 Number of I/O Completion Queues: 64 00:19:02.566 00:19:02.566 ZNS Specific Controller Data 00:19:02.566 ============================ 00:19:02.566 Zone Append Size Limit: 0 00:19:02.566 00:19:02.566 00:19:02.566 Active Namespaces 00:19:02.566 ================= 00:19:02.566 Namespace ID:1 00:19:02.566 Error Recovery Timeout: Unlimited 00:19:02.566 Command Set Identifier: NVM (00h) 00:19:02.566 Deallocate: Supported 00:19:02.566 Deallocated/Unwritten Error: Supported 00:19:02.566 Deallocated Read Value: All 0x00 00:19:02.566 Deallocate in Write Zeroes: Not Supported 00:19:02.566 Deallocated Guard Field: 0xFFFF 00:19:02.566 Flush: Supported 00:19:02.566 Reservation: Not Supported 00:19:02.566 Namespace Sharing Capabilities: Private 00:19:02.566 Size (in LBAs): 1310720 (5GiB) 00:19:02.566 Capacity (in LBAs): 1310720 (5GiB) 00:19:02.566 Utilization (in LBAs): 1310720 (5GiB) 00:19:02.566 Thin Provisioning: Not Supported 00:19:02.566 Per-NS Atomic Units: No 00:19:02.566 Maximum Single Source Range Length: 128 00:19:02.566 Maximum Copy Length: 128 00:19:02.566 Maximum Source Range Count: 128 00:19:02.566 NGUID/EUI64 Never Reused: No 00:19:02.566 Namespace Write Protected: No 00:19:02.566 Number of LBA Formats: 8 00:19:02.566 Current LBA Format: LBA Format #04 00:19:02.566 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:02.566 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:02.566 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:02.566 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:02.566 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:02.566 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:02.566 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:02.566 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:02.566 00:19:02.566 00:19:02.566 real 0m1.136s 00:19:02.566 user 0m0.072s 00:19:02.566 sys 0m1.076s 00:19:02.566 10:23:07 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:02.566 10:23:07 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:19:02.566 ************************************ 00:19:02.566 END TEST nvme_identify 00:19:02.566 ************************************ 00:19:02.566 10:23:07 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:19:02.566 10:23:07 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:02.566 10:23:07 nvme -- common/autotest_common.sh@1106 -- # 
xtrace_disable 00:19:02.566 10:23:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.566 ************************************ 00:19:02.566 START TEST nvme_perf 00:19:02.566 ************************************ 00:19:02.566 10:23:07 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # nvme_perf 00:19:02.566 10:23:07 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:19:03.134 EAL: TSC is not safe to use in SMP mode 00:19:03.134 EAL: TSC is not invariant 00:19:03.134 [2024-06-10 10:23:08.457550] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:04.070 Initializing NVMe Controllers 00:19:04.070 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:04.070 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:04.070 Initialization complete. Launching workers. 00:19:04.070 ======================================================== 00:19:04.070 Latency(us) 00:19:04.070 Device Information : IOPS MiB/s Average min max 00:19:04.070 PCIE (0000:00:10.0) NSID 1 from core 0: 87989.18 1031.12 1454.68 713.20 5335.65 00:19:04.070 ======================================================== 00:19:04.070 Total : 87989.18 1031.12 1454.68 713.20 5335.65 00:19:04.070 00:19:04.070 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:04.070 ================================================================================= 00:19:04.070 1.00000% : 1170.286us 00:19:04.070 10.00000% : 1279.512us 00:19:04.070 25.00000% : 1341.927us 00:19:04.070 50.00000% : 1427.748us 00:19:04.070 75.00000% : 1544.777us 00:19:04.070 90.00000% : 1654.004us 00:19:04.070 95.00000% : 1732.023us 00:19:04.070 98.00000% : 1856.853us 00:19:04.070 99.00000% : 1942.674us 00:19:04.070 99.50000% : 2012.891us 00:19:04.070 99.90000% : 4837.180us 00:19:04.070 99.99000% : 5305.294us 00:19:04.070 99.99900% : 5336.502us 00:19:04.070 99.99990% : 5336.502us 00:19:04.070 99.99999% : 5336.502us 00:19:04.070 00:19:04.070 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:04.070 ============================================================================== 00:19:04.070 Range in us Cumulative IO count 00:19:04.070 709.973 - 713.874: 0.0011% ( 1) 00:19:04.070 717.775 - 721.676: 0.0057% ( 4) 00:19:04.070 721.676 - 725.577: 0.0091% ( 3) 00:19:04.070 725.577 - 729.478: 0.0125% ( 3) 00:19:04.070 850.407 - 854.308: 0.0148% ( 2) 00:19:04.070 854.308 - 858.209: 0.0182% ( 3) 00:19:04.070 858.209 - 862.110: 0.0205% ( 2) 00:19:04.070 862.110 - 866.011: 0.0250% ( 4) 00:19:04.070 866.011 - 869.912: 0.0273% ( 2) 00:19:04.070 869.912 - 873.813: 0.0307% ( 3) 00:19:04.070 873.813 - 877.714: 0.0330% ( 2) 00:19:04.070 877.714 - 881.615: 0.0364% ( 3) 00:19:04.070 881.615 - 885.516: 0.0386% ( 2) 00:19:04.070 885.516 - 889.417: 0.0420% ( 3) 00:19:04.070 889.417 - 893.318: 0.0455% ( 3) 00:19:04.070 893.318 - 897.219: 0.0477% ( 2) 00:19:04.070 897.219 - 901.120: 0.0511% ( 3) 00:19:04.070 901.120 - 905.021: 0.0534% ( 2) 00:19:04.070 905.021 - 908.922: 0.0568% ( 3) 00:19:04.070 908.922 - 912.823: 0.0591% ( 2) 00:19:04.070 912.823 - 916.724: 0.0625% ( 3) 00:19:04.070 916.724 - 920.625: 0.0659% ( 3) 00:19:04.070 920.625 - 924.526: 0.0682% ( 2) 00:19:04.070 924.526 - 928.427: 0.0705% ( 2) 00:19:04.070 1014.247 - 1022.049: 0.0739% ( 3) 00:19:04.070 1022.049 - 1029.851: 0.0773% ( 3) 00:19:04.070 1029.851 - 1037.653: 0.0830% ( 5) 00:19:04.070 1037.653 - 1045.455: 0.0921% ( 8) 00:19:04.070 1045.455 - 
1053.257: 0.1011% ( 8) 00:19:04.070 1053.257 - 1061.059: 0.1102% ( 8) 00:19:04.070 1061.059 - 1068.861: 0.1193% ( 8) 00:19:04.070 1068.861 - 1076.663: 0.1318% ( 11) 00:19:04.070 1076.663 - 1084.465: 0.1489% ( 15) 00:19:04.070 1084.465 - 1092.266: 0.1739% ( 22) 00:19:04.070 1092.266 - 1100.068: 0.2080% ( 30) 00:19:04.070 1100.068 - 1107.870: 0.2500% ( 37) 00:19:04.070 1107.870 - 1115.672: 0.3057% ( 49) 00:19:04.070 1115.672 - 1123.474: 0.3773% ( 63) 00:19:04.070 1123.474 - 1131.276: 0.4489% ( 63) 00:19:04.070 1131.276 - 1139.078: 0.5512% ( 90) 00:19:04.070 1139.078 - 1146.880: 0.6705% ( 105) 00:19:04.070 1146.880 - 1154.682: 0.8092% ( 122) 00:19:04.070 1154.682 - 1162.484: 0.9524% ( 126) 00:19:04.070 1162.484 - 1170.286: 1.1251% ( 152) 00:19:04.070 1170.286 - 1178.087: 1.3228% ( 174) 00:19:04.070 1178.087 - 1185.889: 1.5547% ( 204) 00:19:04.070 1185.889 - 1193.691: 1.8195% ( 233) 00:19:04.070 1193.691 - 1201.493: 2.1309% ( 274) 00:19:04.070 1201.493 - 1209.295: 2.4911% ( 317) 00:19:04.070 1209.295 - 1217.097: 2.9219% ( 379) 00:19:04.070 1217.097 - 1224.899: 3.4412% ( 457) 00:19:04.070 1224.899 - 1232.701: 4.0265% ( 515) 00:19:04.070 1232.701 - 1240.503: 4.7436% ( 631) 00:19:04.070 1240.503 - 1248.305: 5.5687% ( 726) 00:19:04.070 1248.305 - 1256.106: 6.5608% ( 873) 00:19:04.070 1256.106 - 1263.908: 7.6802% ( 985) 00:19:04.070 1263.908 - 1271.710: 8.9110% ( 1083) 00:19:04.070 1271.710 - 1279.512: 10.3168% ( 1237) 00:19:04.070 1279.512 - 1287.314: 11.8318% ( 1333) 00:19:04.070 1287.314 - 1295.116: 13.5115% ( 1478) 00:19:04.070 1295.116 - 1302.918: 15.2866% ( 1562) 00:19:04.070 1302.918 - 1310.720: 17.1993% ( 1683) 00:19:04.070 1310.720 - 1318.522: 19.1904% ( 1752) 00:19:04.070 1318.522 - 1326.324: 21.2940% ( 1851) 00:19:04.070 1326.324 - 1334.125: 23.4703% ( 1915) 00:19:04.070 1334.125 - 1341.927: 25.7285% ( 1987) 00:19:04.070 1341.927 - 1349.729: 28.0400% ( 2034) 00:19:04.070 1349.729 - 1357.531: 30.4198% ( 2094) 00:19:04.070 1357.531 - 1365.333: 32.7598% ( 2059) 00:19:04.070 1365.333 - 1373.135: 35.1168% ( 2074) 00:19:04.070 1373.135 - 1380.937: 37.4420% ( 2046) 00:19:04.070 1380.937 - 1388.739: 39.7548% ( 2035) 00:19:04.070 1388.739 - 1396.541: 42.0356% ( 2007) 00:19:04.070 1396.541 - 1404.343: 44.2415% ( 1941) 00:19:04.070 1404.343 - 1412.145: 46.4054% ( 1904) 00:19:04.070 1412.145 - 1419.946: 48.4874% ( 1832) 00:19:04.070 1419.946 - 1427.748: 50.4750% ( 1749) 00:19:04.070 1427.748 - 1435.550: 52.4366% ( 1726) 00:19:04.070 1435.550 - 1443.352: 54.3697% ( 1701) 00:19:04.070 1443.352 - 1451.154: 56.2062% ( 1616) 00:19:04.070 1451.154 - 1458.956: 57.9871% ( 1567) 00:19:04.070 1458.956 - 1466.758: 59.7247% ( 1529) 00:19:04.070 1466.758 - 1474.560: 61.3988% ( 1473) 00:19:04.070 1474.560 - 1482.362: 63.0535% ( 1456) 00:19:04.070 1482.362 - 1490.164: 64.6752% ( 1427) 00:19:04.070 1490.164 - 1497.965: 66.2663% ( 1400) 00:19:04.070 1497.965 - 1505.767: 67.8062% ( 1355) 00:19:04.070 1505.767 - 1513.569: 69.2938% ( 1309) 00:19:04.070 1513.569 - 1521.371: 70.7689% ( 1298) 00:19:04.070 1521.371 - 1529.173: 72.2384% ( 1293) 00:19:04.070 1529.173 - 1536.975: 73.6874% ( 1275) 00:19:04.070 1536.975 - 1544.777: 75.1375% ( 1276) 00:19:04.070 1544.777 - 1552.579: 76.5365% ( 1231) 00:19:04.070 1552.579 - 1560.381: 77.8571% ( 1162) 00:19:04.070 1560.381 - 1568.183: 79.1538% ( 1141) 00:19:04.070 1568.183 - 1575.984: 80.4607% ( 1150) 00:19:04.070 1575.984 - 1583.786: 81.7427% ( 1128) 00:19:04.070 1583.786 - 1591.588: 82.9996% ( 1106) 00:19:04.070 1591.588 - 1599.390: 84.1577% ( 1019) 00:19:04.070 1599.390 - 
1607.192: 85.2896% ( 996) 00:19:04.070 1607.192 - 1614.994: 86.3613% ( 943) 00:19:04.070 1614.994 - 1622.796: 87.3579% ( 877) 00:19:04.070 1622.796 - 1630.598: 88.2614% ( 795) 00:19:04.070 1630.598 - 1638.400: 89.0763% ( 717) 00:19:04.070 1638.400 - 1646.202: 89.8343% ( 667) 00:19:04.070 1646.202 - 1654.004: 90.5526% ( 632) 00:19:04.070 1654.004 - 1661.805: 91.1958% ( 566) 00:19:04.070 1661.805 - 1669.607: 91.8004% ( 532) 00:19:04.070 1669.607 - 1677.409: 92.3584% ( 491) 00:19:04.070 1677.409 - 1685.211: 92.8528% ( 435) 00:19:04.070 1685.211 - 1693.013: 93.3244% ( 415) 00:19:04.070 1693.013 - 1700.815: 93.7517% ( 376) 00:19:04.070 1700.815 - 1708.617: 94.1461% ( 347) 00:19:04.070 1708.617 - 1716.419: 94.4927% ( 305) 00:19:04.070 1716.419 - 1724.221: 94.8063% ( 276) 00:19:04.070 1724.221 - 1732.023: 95.0848% ( 245) 00:19:04.070 1732.023 - 1739.824: 95.3598% ( 242) 00:19:04.070 1739.824 - 1747.626: 95.6257% ( 234) 00:19:04.070 1747.626 - 1755.428: 95.8644% ( 210) 00:19:04.070 1755.428 - 1763.230: 96.0860% ( 195) 00:19:04.070 1763.230 - 1771.032: 96.3076% ( 195) 00:19:04.070 1771.032 - 1778.834: 96.5145% ( 182) 00:19:04.070 1778.834 - 1786.636: 96.7122% ( 174) 00:19:04.070 1786.636 - 1794.438: 96.9099% ( 174) 00:19:04.070 1794.438 - 1802.240: 97.0941% ( 162) 00:19:04.070 1802.240 - 1810.042: 97.2736% ( 158) 00:19:04.070 1810.042 - 1817.844: 97.4327% ( 140) 00:19:04.070 1817.844 - 1825.645: 97.5839% ( 133) 00:19:04.070 1825.645 - 1833.447: 97.7316% ( 130) 00:19:04.070 1833.447 - 1841.249: 97.8691% ( 121) 00:19:04.070 1841.249 - 1849.051: 97.9964% ( 112) 00:19:04.070 1849.051 - 1856.853: 98.1112% ( 101) 00:19:04.070 1856.853 - 1864.655: 98.2214% ( 97) 00:19:04.070 1864.655 - 1872.457: 98.3271% ( 93) 00:19:04.070 1872.457 - 1880.259: 98.4294% ( 90) 00:19:04.070 1880.259 - 1888.061: 98.5215% ( 81) 00:19:04.071 1888.061 - 1895.863: 98.6056% ( 74) 00:19:04.071 1895.863 - 1903.664: 98.6794% ( 65) 00:19:04.071 1903.664 - 1911.466: 98.7465% ( 59) 00:19:04.071 1911.466 - 1919.268: 98.8135% ( 59) 00:19:04.071 1919.268 - 1927.070: 98.8863% ( 64) 00:19:04.071 1927.070 - 1934.872: 98.9590% ( 64) 00:19:04.071 1934.872 - 1942.674: 99.0374% ( 69) 00:19:04.071 1942.674 - 1950.476: 99.1067% ( 61) 00:19:04.071 1950.476 - 1958.278: 99.1715% ( 57) 00:19:04.071 1958.278 - 1966.080: 99.2249% ( 47) 00:19:04.071 1966.080 - 1973.882: 99.2738% ( 43) 00:19:04.071 1973.882 - 1981.683: 99.3193% ( 40) 00:19:04.071 1981.683 - 1989.485: 99.3693% ( 44) 00:19:04.071 1989.485 - 1997.287: 99.4159% ( 41) 00:19:04.071 1997.287 - 2012.891: 99.5034% ( 77) 00:19:04.071 2012.891 - 2028.495: 99.5681% ( 57) 00:19:04.071 2028.495 - 2044.099: 99.6329% ( 57) 00:19:04.071 2044.099 - 2059.703: 99.7000% ( 59) 00:19:04.071 2059.703 - 2075.306: 99.7523% ( 46) 00:19:04.071 2075.306 - 2090.910: 99.7863% ( 30) 00:19:04.071 2090.910 - 2106.514: 99.8091% ( 20) 00:19:04.071 2106.514 - 2122.118: 99.8261% ( 15) 00:19:04.071 2122.118 - 2137.722: 99.8398% ( 12) 00:19:04.071 2137.722 - 2153.325: 99.8477% ( 7) 00:19:04.071 2871.100 - 2886.704: 99.8488% ( 1) 00:19:04.071 2964.723 - 2980.327: 99.8500% ( 1) 00:19:04.071 2980.327 - 2995.931: 99.8511% ( 1) 00:19:04.071 3042.742 - 3058.346: 99.8523% ( 1) 00:19:04.071 3073.950 - 3089.554: 99.8534% ( 1) 00:19:04.071 3136.365 - 3151.969: 99.8545% ( 1) 00:19:04.071 4431.481 - 4462.689: 99.8568% ( 2) 00:19:04.071 4462.689 - 4493.896: 99.8625% ( 5) 00:19:04.071 4493.896 - 4525.104: 99.8670% ( 4) 00:19:04.071 4649.934 - 4681.142: 99.8693% ( 2) 00:19:04.071 4681.142 - 4712.350: 99.8750% ( 5) 00:19:04.071 4712.350 - 
4743.557: 99.8818% ( 6) 00:19:04.071 4743.557 - 4774.765: 99.8886% ( 6) 00:19:04.071 4774.765 - 4805.973: 99.8954% ( 6) 00:19:04.071 4805.973 - 4837.180: 99.9011% ( 5) 00:19:04.071 4837.180 - 4868.388: 99.9079% ( 6) 00:19:04.071 4868.388 - 4899.595: 99.9125% ( 4) 00:19:04.071 4899.595 - 4930.803: 99.9193% ( 6) 00:19:04.071 4930.803 - 4962.011: 99.9261% ( 6) 00:19:04.071 4962.011 - 4993.218: 99.9318% ( 5) 00:19:04.071 4993.218 - 5024.426: 99.9375% ( 5) 00:19:04.071 5024.426 - 5055.633: 99.9443% ( 6) 00:19:04.071 5055.633 - 5086.841: 99.9511% ( 6) 00:19:04.071 5086.841 - 5118.049: 99.9568% ( 5) 00:19:04.071 5118.049 - 5149.256: 99.9636% ( 6) 00:19:04.071 5149.256 - 5180.464: 99.9693% ( 5) 00:19:04.071 5180.464 - 5211.672: 99.9761% ( 6) 00:19:04.071 5211.672 - 5242.879: 99.9818% ( 5) 00:19:04.071 5242.879 - 5274.087: 99.9886% ( 6) 00:19:04.071 5274.087 - 5305.294: 99.9943% ( 5) 00:19:04.071 5305.294 - 5336.502: 100.0000% ( 5) 00:19:04.071 00:19:04.071 10:23:09 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:19:05.007 EAL: TSC is not safe to use in SMP mode 00:19:05.007 EAL: TSC is not invariant 00:19:05.007 [2024-06-10 10:23:10.304328] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:05.943 Initializing NVMe Controllers 00:19:05.943 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:05.943 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:05.943 Initialization complete. Launching workers. 00:19:05.943 ======================================================== 00:19:05.943 Latency(us) 00:19:05.943 Device Information : IOPS MiB/s Average min max 00:19:05.943 PCIE (0000:00:10.0) NSID 1 from core 0: 71063.83 832.78 1801.59 364.46 4850.47 00:19:05.943 ======================================================== 00:19:05.943 Total : 71063.83 832.78 1801.59 364.46 4850.47 00:19:05.943 00:19:05.943 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:05.943 ================================================================================= 00:19:05.943 1.00000% : 1310.720us 00:19:05.943 10.00000% : 1529.173us 00:19:05.943 25.00000% : 1622.796us 00:19:05.943 50.00000% : 1716.419us 00:19:05.943 75.00000% : 1895.863us 00:19:05.943 90.00000% : 2293.760us 00:19:05.944 95.00000% : 2449.798us 00:19:05.944 98.00000% : 2590.232us 00:19:05.944 99.00000% : 2715.062us 00:19:05.944 99.50000% : 3073.950us 00:19:05.944 99.90000% : 4337.858us 00:19:05.944 99.99000% : 4805.973us 00:19:05.944 99.99900% : 4868.388us 00:19:05.944 99.99990% : 4868.388us 00:19:05.944 99.99999% : 4868.388us 00:19:05.944 00:19:05.944 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:05.944 ============================================================================== 00:19:05.944 Range in us Cumulative IO count 00:19:05.944 362.789 - 364.739: 0.0014% ( 1) 00:19:05.944 370.590 - 372.541: 0.0042% ( 2) 00:19:05.944 372.541 - 374.491: 0.0070% ( 2) 00:19:05.944 374.491 - 376.442: 0.0084% ( 1) 00:19:05.944 393.996 - 395.947: 0.0098% ( 1) 00:19:05.944 395.947 - 397.897: 0.0141% ( 3) 00:19:05.944 397.897 - 399.848: 0.0155% ( 1) 00:19:05.944 399.848 - 401.798: 0.0169% ( 1) 00:19:05.944 403.749 - 405.699: 0.0197% ( 2) 00:19:05.944 405.699 - 407.649: 0.0211% ( 1) 00:19:05.944 407.649 - 409.600: 0.0225% ( 1) 00:19:05.944 409.600 - 411.550: 0.0239% ( 1) 00:19:05.944 550.034 - 553.935: 0.0267% ( 2) 00:19:05.944 553.935 - 557.836: 0.0338% ( 5) 00:19:05.944 557.836 - 
561.737: 0.0394% ( 4) 00:19:05.944 561.737 - 565.638: 0.0450% ( 4) 00:19:05.944 565.638 - 569.539: 0.0492% ( 3) 00:19:05.944 569.539 - 573.440: 0.0507% ( 1) 00:19:05.944 573.440 - 577.341: 0.0549% ( 3) 00:19:05.944 577.341 - 581.242: 0.0577% ( 2) 00:19:05.944 581.242 - 585.143: 0.0605% ( 2) 00:19:05.944 585.143 - 589.044: 0.0619% ( 1) 00:19:05.944 971.337 - 975.238: 0.0633% ( 1) 00:19:05.944 979.139 - 983.040: 0.0661% ( 2) 00:19:05.944 983.040 - 986.941: 0.0704% ( 3) 00:19:05.944 986.941 - 990.842: 0.0718% ( 1) 00:19:05.944 990.842 - 994.743: 0.0788% ( 5) 00:19:05.944 994.743 - 998.644: 0.0830% ( 3) 00:19:05.944 998.644 - 1006.446: 0.0929% ( 7) 00:19:05.944 1006.446 - 1014.247: 0.1196% ( 19) 00:19:05.944 1014.247 - 1022.049: 0.1351% ( 11) 00:19:05.944 1022.049 - 1029.851: 0.1463% ( 8) 00:19:05.944 1029.851 - 1037.653: 0.1688% ( 16) 00:19:05.944 1037.653 - 1045.455: 0.1773% ( 6) 00:19:05.944 1045.455 - 1053.257: 0.1871% ( 7) 00:19:05.944 1053.257 - 1061.059: 0.2012% ( 10) 00:19:05.944 1061.059 - 1068.861: 0.2181% ( 12) 00:19:05.944 1068.861 - 1076.663: 0.2336% ( 11) 00:19:05.944 1076.663 - 1084.465: 0.2490% ( 11) 00:19:05.944 1084.465 - 1092.266: 0.2730% ( 17) 00:19:05.944 1092.266 - 1100.068: 0.2800% ( 5) 00:19:05.944 1100.068 - 1107.870: 0.2856% ( 4) 00:19:05.944 1107.870 - 1115.672: 0.2941% ( 6) 00:19:05.944 1115.672 - 1123.474: 0.3011% ( 5) 00:19:05.944 1123.474 - 1131.276: 0.3095% ( 6) 00:19:05.944 1131.276 - 1139.078: 0.3180% ( 6) 00:19:05.944 1139.078 - 1146.880: 0.3236% ( 4) 00:19:05.944 1146.880 - 1154.682: 0.3292% ( 4) 00:19:05.944 1154.682 - 1162.484: 0.3321% ( 2) 00:19:05.944 1162.484 - 1170.286: 0.3363% ( 3) 00:19:05.944 1170.286 - 1178.087: 0.3391% ( 2) 00:19:05.944 1178.087 - 1185.889: 0.3433% ( 3) 00:19:05.944 1185.889 - 1193.691: 0.3489% ( 4) 00:19:05.944 1193.691 - 1201.493: 0.3574% ( 6) 00:19:05.944 1201.493 - 1209.295: 0.3785% ( 15) 00:19:05.944 1209.295 - 1217.097: 0.3940% ( 11) 00:19:05.944 1217.097 - 1224.899: 0.4151% ( 15) 00:19:06.203 1224.899 - 1232.701: 0.4404% ( 18) 00:19:06.203 1232.701 - 1240.503: 0.4643% ( 17) 00:19:06.203 1240.503 - 1248.305: 0.4911% ( 19) 00:19:06.203 1248.305 - 1256.106: 0.5290% ( 27) 00:19:06.203 1256.106 - 1263.908: 0.5867% ( 41) 00:19:06.203 1263.908 - 1271.710: 0.6402% ( 38) 00:19:06.203 1271.710 - 1279.512: 0.7091% ( 49) 00:19:06.203 1279.512 - 1287.314: 0.7696% ( 43) 00:19:06.203 1287.314 - 1295.116: 0.8428% ( 52) 00:19:06.203 1295.116 - 1302.918: 0.9329% ( 64) 00:19:06.203 1302.918 - 1310.720: 1.0313% ( 70) 00:19:06.203 1310.720 - 1318.522: 1.1481% ( 83) 00:19:06.203 1318.522 - 1326.324: 1.2762% ( 91) 00:19:06.203 1326.324 - 1334.125: 1.4394% ( 116) 00:19:06.203 1334.125 - 1341.927: 1.5773% ( 98) 00:19:06.203 1341.927 - 1349.729: 1.7377% ( 114) 00:19:06.203 1349.729 - 1357.531: 1.9206% ( 130) 00:19:06.203 1357.531 - 1365.333: 2.1133% ( 137) 00:19:06.203 1365.333 - 1373.135: 2.2822% ( 120) 00:19:06.203 1373.135 - 1380.937: 2.5017% ( 156) 00:19:06.203 1380.937 - 1388.739: 2.6874% ( 132) 00:19:06.203 1388.739 - 1396.541: 2.8844% ( 140) 00:19:06.203 1396.541 - 1404.343: 3.1560% ( 193) 00:19:06.203 1404.343 - 1412.145: 3.4514% ( 210) 00:19:06.203 1412.145 - 1419.946: 3.8018% ( 249) 00:19:06.204 1419.946 - 1427.748: 4.1254% ( 230) 00:19:06.204 1427.748 - 1435.550: 4.4279% ( 215) 00:19:06.204 1435.550 - 1443.352: 4.7543% ( 232) 00:19:06.204 1443.352 - 1451.154: 5.0554% ( 214) 00:19:06.204 1451.154 - 1458.956: 5.4128% ( 254) 00:19:06.204 1458.956 - 1466.758: 5.8082% ( 281) 00:19:06.204 1466.758 - 1474.560: 6.2444% ( 310) 00:19:06.204 
1474.560 - 1482.362: 6.7889% ( 387) 00:19:06.204 1482.362 - 1490.164: 7.3531% ( 401) 00:19:06.204 1490.164 - 1497.965: 7.9187% ( 402) 00:19:06.204 1497.965 - 1505.767: 8.4858% ( 403) 00:19:06.204 1505.767 - 1513.569: 9.0612% ( 409) 00:19:06.204 1513.569 - 1521.371: 9.6944% ( 450) 00:19:06.204 1521.371 - 1529.173: 10.4936% ( 568) 00:19:06.204 1529.173 - 1536.975: 11.3448% ( 605) 00:19:06.204 1536.975 - 1544.777: 12.1426% ( 567) 00:19:06.204 1544.777 - 1552.579: 13.1191% ( 694) 00:19:06.204 1552.579 - 1560.381: 14.1040% ( 700) 00:19:06.204 1560.381 - 1568.183: 15.3436% ( 881) 00:19:06.204 1568.183 - 1575.984: 16.5832% ( 881) 00:19:06.204 1575.984 - 1583.786: 18.0788% ( 1063) 00:19:06.204 1583.786 - 1591.588: 19.6688% ( 1130) 00:19:06.204 1591.588 - 1599.390: 21.2517% ( 1125) 00:19:06.204 1599.390 - 1607.192: 23.0175% ( 1255) 00:19:06.204 1607.192 - 1614.994: 24.9353% ( 1363) 00:19:06.204 1614.994 - 1622.796: 26.9628% ( 1441) 00:19:06.204 1622.796 - 1630.598: 28.9045% ( 1380) 00:19:06.204 1630.598 - 1638.400: 31.0136% ( 1499) 00:19:06.204 1638.400 - 1646.202: 32.9849% ( 1401) 00:19:06.204 1646.202 - 1654.004: 35.0476% ( 1466) 00:19:06.204 1654.004 - 1661.805: 37.1919% ( 1524) 00:19:06.204 1661.805 - 1669.607: 39.1659% ( 1403) 00:19:06.204 1669.607 - 1677.409: 41.1639% ( 1420) 00:19:06.204 1677.409 - 1685.211: 43.0690% ( 1354) 00:19:06.204 1685.211 - 1693.013: 45.0191% ( 1386) 00:19:06.204 1693.013 - 1700.815: 46.7456% ( 1227) 00:19:06.204 1700.815 - 1708.617: 48.5001% ( 1247) 00:19:06.204 1708.617 - 1716.419: 50.1407% ( 1166) 00:19:06.204 1716.419 - 1724.221: 51.6715% ( 1088) 00:19:06.204 1724.221 - 1732.023: 53.2319% ( 1109) 00:19:06.204 1732.023 - 1739.824: 54.6882% ( 1035) 00:19:06.204 1739.824 - 1747.626: 56.0826% ( 991) 00:19:06.204 1747.626 - 1755.428: 57.5754% ( 1061) 00:19:06.204 1755.428 - 1763.230: 59.0753% ( 1066) 00:19:06.204 1763.230 - 1771.032: 60.4823% ( 1000) 00:19:06.204 1771.032 - 1778.834: 61.8725% ( 988) 00:19:06.204 1778.834 - 1786.636: 63.0966% ( 870) 00:19:06.204 1786.636 - 1794.438: 64.3108% ( 863) 00:19:06.204 1794.438 - 1802.240: 65.4407% ( 803) 00:19:06.204 1802.240 - 1810.042: 66.5396% ( 781) 00:19:06.204 1810.042 - 1817.844: 67.5878% ( 745) 00:19:06.204 1817.844 - 1825.645: 68.6037% ( 722) 00:19:06.204 1825.645 - 1833.447: 69.5225% ( 653) 00:19:06.204 1833.447 - 1841.249: 70.3399% ( 581) 00:19:06.204 1841.249 - 1849.051: 71.1884% ( 603) 00:19:06.204 1849.051 - 1856.853: 71.9341% ( 530) 00:19:06.204 1856.853 - 1864.655: 72.6657% ( 520) 00:19:06.204 1864.655 - 1872.457: 73.4269% ( 541) 00:19:06.204 1872.457 - 1880.259: 74.1319% ( 501) 00:19:06.204 1880.259 - 1888.061: 74.7678% ( 452) 00:19:06.204 1888.061 - 1895.863: 75.4460% ( 482) 00:19:06.204 1895.863 - 1903.664: 76.0693% ( 443) 00:19:06.204 1903.664 - 1911.466: 76.6476% ( 411) 00:19:06.204 1911.466 - 1919.268: 77.1823% ( 380) 00:19:06.204 1919.268 - 1927.070: 77.6832% ( 356) 00:19:06.204 1927.070 - 1934.872: 78.1644% ( 342) 00:19:06.204 1934.872 - 1942.674: 78.6583% ( 351) 00:19:06.204 1942.674 - 1950.476: 79.0818% ( 301) 00:19:06.204 1950.476 - 1958.278: 79.4575% ( 267) 00:19:06.204 1958.278 - 1966.080: 79.8022% ( 245) 00:19:06.204 1966.080 - 1973.882: 80.1567% ( 252) 00:19:06.204 1973.882 - 1981.683: 80.4958% ( 241) 00:19:06.204 1981.683 - 1989.485: 80.8279% ( 236) 00:19:06.204 1989.485 - 1997.287: 81.1698% ( 243) 00:19:06.204 1997.287 - 2012.891: 81.8311% ( 470) 00:19:06.204 2012.891 - 2028.495: 82.4882% ( 467) 00:19:06.204 2028.495 - 2044.099: 83.1410% ( 464) 00:19:06.204 2044.099 - 2059.703: 83.7320% ( 420) 
00:19:06.204 2059.703 - 2075.306: 84.2779% ( 388) 00:19:06.204 2075.306 - 2090.910: 84.8196% ( 385) 00:19:06.204 2090.910 - 2106.514: 85.3458% ( 374) 00:19:06.204 2106.514 - 2122.118: 85.9269% ( 413) 00:19:06.204 2122.118 - 2137.722: 86.4518% ( 373) 00:19:06.204 2137.722 - 2153.325: 86.8697% ( 297) 00:19:06.204 2153.325 - 2168.929: 87.2946% ( 302) 00:19:06.204 2168.929 - 2184.533: 87.6970% ( 286) 00:19:06.204 2184.533 - 2200.137: 88.1064% ( 291) 00:19:06.204 2200.137 - 2215.741: 88.4849% ( 269) 00:19:06.204 2215.741 - 2231.344: 88.8325% ( 247) 00:19:06.204 2231.344 - 2246.948: 89.1842% ( 250) 00:19:06.204 2246.948 - 2262.552: 89.4980% ( 223) 00:19:06.204 2262.552 - 2278.156: 89.7963% ( 212) 00:19:06.204 2278.156 - 2293.760: 90.0523% ( 182) 00:19:06.204 2293.760 - 2309.363: 90.3042% ( 179) 00:19:06.204 2309.363 - 2324.967: 90.6208% ( 225) 00:19:06.204 2324.967 - 2340.571: 91.0781% ( 325) 00:19:06.204 2340.571 - 2356.175: 91.4931% ( 295) 00:19:06.204 2356.175 - 2371.779: 92.0517% ( 397) 00:19:06.204 2371.779 - 2387.382: 92.6061% ( 394) 00:19:06.204 2387.382 - 2402.986: 93.2055% ( 426) 00:19:06.204 2402.986 - 2418.590: 93.8597% ( 465) 00:19:06.204 2418.590 - 2434.194: 94.4901% ( 448) 00:19:06.204 2434.194 - 2449.798: 95.0768% ( 417) 00:19:06.204 2449.798 - 2465.401: 95.6692% ( 421) 00:19:06.204 2465.401 - 2481.005: 96.1926% ( 372) 00:19:06.204 2481.005 - 2496.609: 96.6091% ( 296) 00:19:06.204 2496.609 - 2512.213: 96.9608% ( 250) 00:19:06.204 2512.213 - 2527.817: 97.3070% ( 246) 00:19:06.204 2527.817 - 2543.421: 97.5673% ( 185) 00:19:06.204 2543.421 - 2559.024: 97.8022% ( 167) 00:19:06.204 2559.024 - 2574.628: 97.9922% ( 135) 00:19:06.204 2574.628 - 2590.232: 98.1512% ( 113) 00:19:06.204 2590.232 - 2605.836: 98.3088% ( 112) 00:19:06.204 2605.836 - 2621.440: 98.4410% ( 94) 00:19:06.204 2621.440 - 2637.043: 98.5648% ( 88) 00:19:06.204 2637.043 - 2652.647: 98.6563% ( 65) 00:19:06.204 2652.647 - 2668.251: 98.7534% ( 69) 00:19:06.204 2668.251 - 2683.855: 98.8561% ( 73) 00:19:06.204 2683.855 - 2699.459: 98.9447% ( 63) 00:19:06.204 2699.459 - 2715.062: 99.0151% ( 50) 00:19:06.204 2715.062 - 2730.666: 99.0911% ( 54) 00:19:06.204 2730.666 - 2746.270: 99.1628% ( 51) 00:19:06.204 2746.270 - 2761.874: 99.2036% ( 29) 00:19:06.204 2761.874 - 2777.478: 99.2374% ( 24) 00:19:06.204 2777.478 - 2793.081: 99.2585% ( 15) 00:19:06.204 2793.081 - 2808.685: 99.2796% ( 15) 00:19:06.204 2808.685 - 2824.289: 99.3021% ( 16) 00:19:06.204 2824.289 - 2839.893: 99.3190% ( 12) 00:19:06.204 2839.893 - 2855.497: 99.3288% ( 7) 00:19:06.204 2855.497 - 2871.100: 99.3359% ( 5) 00:19:06.204 2871.100 - 2886.704: 99.3401% ( 3) 00:19:06.204 2886.704 - 2902.308: 99.3457% ( 4) 00:19:06.204 2902.308 - 2917.912: 99.3500% ( 3) 00:19:06.204 2917.912 - 2933.516: 99.3528% ( 2) 00:19:06.204 2949.120 - 2964.723: 99.3570% ( 3) 00:19:06.204 2964.723 - 2980.327: 99.3598% ( 2) 00:19:06.204 2980.327 - 2995.931: 99.4330% ( 52) 00:19:06.204 2995.931 - 3011.535: 99.4428% ( 7) 00:19:06.204 3011.535 - 3027.139: 99.4569% ( 10) 00:19:06.204 3027.139 - 3042.742: 99.4710% ( 10) 00:19:06.204 3042.742 - 3058.346: 99.4878% ( 12) 00:19:06.204 3058.346 - 3073.950: 99.5047% ( 12) 00:19:06.204 3073.950 - 3089.554: 99.5371% ( 23) 00:19:06.204 3089.554 - 3105.158: 99.5666% ( 21) 00:19:06.204 3105.158 - 3120.761: 99.5849% ( 13) 00:19:06.204 3120.761 - 3136.365: 99.6060% ( 15) 00:19:06.204 3136.365 - 3151.969: 99.6271% ( 15) 00:19:06.204 3151.969 - 3167.573: 99.6440% ( 12) 00:19:06.204 3167.573 - 3183.177: 99.6651% ( 15) 00:19:06.204 3183.177 - 3198.780: 99.6862% ( 15) 
00:19:06.204 3198.780 - 3214.384: 99.7045% ( 13) 00:19:06.204 3214.384 - 3229.988: 99.7200% ( 11) 00:19:06.204 3229.988 - 3245.592: 99.7256% ( 4) 00:19:06.204 3245.592 - 3261.196: 99.7270% ( 1) 00:19:06.204 3308.007 - 3323.611: 99.7299% ( 2) 00:19:06.204 3323.611 - 3339.215: 99.7369% ( 5) 00:19:06.204 3339.215 - 3354.818: 99.7439% ( 5) 00:19:06.205 3354.818 - 3370.422: 99.7538% ( 7) 00:19:06.205 3370.422 - 3386.026: 99.7566% ( 2) 00:19:06.205 3417.234 - 3432.838: 99.7594% ( 2) 00:19:06.205 3479.649 - 3495.253: 99.7664% ( 5) 00:19:06.205 3557.668 - 3573.272: 99.7692% ( 2) 00:19:06.205 3635.687 - 3651.291: 99.7707% ( 1) 00:19:06.205 3651.291 - 3666.895: 99.7749% ( 3) 00:19:06.205 3682.498 - 3698.102: 99.7777% ( 2) 00:19:06.205 3698.102 - 3713.706: 99.7791% ( 1) 00:19:06.205 3713.706 - 3729.310: 99.7861% ( 5) 00:19:06.205 3729.310 - 3744.914: 99.7875% ( 1) 00:19:06.205 3744.914 - 3760.517: 99.7960% ( 6) 00:19:06.205 3760.517 - 3776.121: 99.8030% ( 5) 00:19:06.205 3791.725 - 3807.329: 99.8058% ( 2) 00:19:06.205 3885.348 - 3900.952: 99.8072% ( 1) 00:19:06.205 3932.159 - 3947.763: 99.8199% ( 9) 00:19:06.205 3947.763 - 3963.367: 99.8227% ( 2) 00:19:06.205 3963.367 - 3978.971: 99.8283% ( 4) 00:19:06.205 3978.971 - 3994.575: 99.8340% ( 4) 00:19:06.205 3994.575 - 4025.782: 99.8438% ( 7) 00:19:06.205 4025.782 - 4056.990: 99.8494% ( 4) 00:19:06.205 4056.990 - 4088.197: 99.8551% ( 4) 00:19:06.205 4088.197 - 4119.405: 99.8621% ( 5) 00:19:06.205 4119.405 - 4150.613: 99.8677% ( 4) 00:19:06.205 4150.613 - 4181.820: 99.8734% ( 4) 00:19:06.205 4181.820 - 4213.028: 99.8790% ( 4) 00:19:06.205 4213.028 - 4244.235: 99.8846% ( 4) 00:19:06.205 4244.235 - 4275.443: 99.8903% ( 4) 00:19:06.205 4275.443 - 4306.651: 99.8959% ( 4) 00:19:06.205 4306.651 - 4337.858: 99.9029% ( 5) 00:19:06.205 4337.858 - 4369.066: 99.9085% ( 4) 00:19:06.205 4369.066 - 4400.274: 99.9142% ( 4) 00:19:06.205 4400.274 - 4431.481: 99.9212% ( 5) 00:19:06.205 4431.481 - 4462.689: 99.9254% ( 3) 00:19:06.205 4462.689 - 4493.896: 99.9325% ( 5) 00:19:06.205 4493.896 - 4525.104: 99.9367% ( 3) 00:19:06.205 4525.104 - 4556.312: 99.9423% ( 4) 00:19:06.205 4556.312 - 4587.519: 99.9493% ( 5) 00:19:06.205 4587.519 - 4618.727: 99.9550% ( 4) 00:19:06.205 4618.727 - 4649.934: 99.9606% ( 4) 00:19:06.205 4649.934 - 4681.142: 99.9662% ( 4) 00:19:06.205 4681.142 - 4712.350: 99.9733% ( 5) 00:19:06.205 4712.350 - 4743.557: 99.9789% ( 4) 00:19:06.205 4743.557 - 4774.765: 99.9859% ( 5) 00:19:06.205 4774.765 - 4805.973: 99.9930% ( 5) 00:19:06.205 4805.973 - 4837.180: 99.9972% ( 3) 00:19:06.205 4837.180 - 4868.388: 100.0000% ( 2) 00:19:06.205 00:19:06.463 10:23:11 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:19:06.463 00:19:06.463 real 0m3.872s 00:19:06.463 user 0m2.503s 00:19:06.463 sys 0m1.367s 00:19:06.463 10:23:11 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:06.463 10:23:11 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:19:06.463 ************************************ 00:19:06.463 END TEST nvme_perf 00:19:06.463 ************************************ 00:19:06.463 10:23:11 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:06.463 10:23:11 nvme -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:19:06.463 10:23:11 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:06.463 10:23:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:06.463 ************************************ 00:19:06.463 START TEST nvme_hello_world 
00:19:06.463 ************************************ 00:19:06.463 10:23:11 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:07.031 EAL: TSC is not safe to use in SMP mode 00:19:07.031 EAL: TSC is not invariant 00:19:07.031 [2024-06-10 10:23:12.389483] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:07.031 Initializing NVMe Controllers 00:19:07.031 Attaching to 0000:00:10.0 00:19:07.031 Attached to 0000:00:10.0 00:19:07.031 Namespace ID: 1 size: 5GB 00:19:07.031 Initialization complete. 00:19:07.031 INFO: using host memory buffer for IO 00:19:07.031 Hello world! 00:19:07.031 00:19:07.031 real 0m0.533s 00:19:07.031 user 0m0.005s 00:19:07.031 sys 0m0.529s 00:19:07.031 ************************************ 00:19:07.031 END TEST nvme_hello_world 00:19:07.031 ************************************ 00:19:07.031 10:23:12 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:07.031 10:23:12 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:07.031 10:23:12 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:07.031 10:23:12 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:07.031 10:23:12 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:07.031 10:23:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.031 ************************************ 00:19:07.031 START TEST nvme_sgl 00:19:07.031 ************************************ 00:19:07.031 10:23:12 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:07.600 EAL: TSC is not safe to use in SMP mode 00:19:07.600 EAL: TSC is not invariant 00:19:07.600 [2024-06-10 10:23:12.960301] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:07.600 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:19:07.600 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:19:07.600 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:19:07.600 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:19:07.600 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:19:07.600 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:19:07.600 NVMe Readv/Writev Request test 00:19:07.600 Attaching to 0000:00:10.0 00:19:07.600 Attached to 0000:00:10.0 00:19:07.600 0000:00:10.0: build_io_request_2 test passed 00:19:07.600 0000:00:10.0: build_io_request_4 test passed 00:19:07.600 0000:00:10.0: build_io_request_5 test passed 00:19:07.600 0000:00:10.0: build_io_request_6 test passed 00:19:07.600 0000:00:10.0: build_io_request_7 test passed 00:19:07.600 0000:00:10.0: build_io_request_10 test passed 00:19:07.600 Cleaning up... 
00:19:07.600 00:19:07.600 real 0m0.541s 00:19:07.600 user 0m0.032s 00:19:07.600 sys 0m0.508s 00:19:07.600 10:23:13 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:07.600 10:23:13 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:19:07.600 ************************************ 00:19:07.600 END TEST nvme_sgl 00:19:07.600 ************************************ 00:19:07.600 10:23:13 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:07.600 10:23:13 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:07.600 10:23:13 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:07.600 10:23:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:07.600 ************************************ 00:19:07.600 START TEST nvme_e2edp 00:19:07.600 ************************************ 00:19:07.600 10:23:13 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:08.168 EAL: TSC is not safe to use in SMP mode 00:19:08.168 EAL: TSC is not invariant 00:19:08.168 [2024-06-10 10:23:13.532151] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:08.168 NVMe Write/Read with End-to-End data protection test 00:19:08.168 Attaching to 0000:00:10.0 00:19:08.168 Attached to 0000:00:10.0 00:19:08.168 Cleaning up... 00:19:08.168 00:19:08.168 real 0m0.514s 00:19:08.168 user 0m0.018s 00:19:08.168 sys 0m0.495s 00:19:08.168 10:23:13 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:08.168 10:23:13 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:19:08.168 ************************************ 00:19:08.168 END TEST nvme_e2edp 00:19:08.168 ************************************ 00:19:08.168 10:23:13 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:08.168 10:23:13 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:08.168 10:23:13 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:08.168 10:23:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.168 ************************************ 00:19:08.168 START TEST nvme_reserve 00:19:08.168 ************************************ 00:19:08.168 10:23:13 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:09.103 EAL: TSC is not safe to use in SMP mode 00:19:09.103 EAL: TSC is not invariant 00:19:09.103 [2024-06-10 10:23:14.402551] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:09.103 ===================================================== 00:19:09.103 NVMe Controller at PCI bus 0, device 16, function 0 00:19:09.103 ===================================================== 00:19:09.103 Reservations: Not Supported 00:19:09.103 Reservation test passed 00:19:09.103 00:19:09.103 real 0m0.841s 00:19:09.103 user 0m0.013s 00:19:09.103 sys 0m0.827s 00:19:09.103 10:23:14 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:09.103 10:23:14 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:19:09.103 ************************************ 00:19:09.103 END TEST nvme_reserve 00:19:09.103 ************************************ 00:19:09.103 10:23:14 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:09.103 10:23:14 nvme -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:09.103 10:23:14 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:09.103 10:23:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:09.103 ************************************ 00:19:09.103 START TEST nvme_err_injection 00:19:09.103 ************************************ 00:19:09.103 10:23:14 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:09.670 EAL: TSC is not safe to use in SMP mode 00:19:09.670 EAL: TSC is not invariant 00:19:09.670 [2024-06-10 10:23:14.999843] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:09.670 NVMe Error Injection test 00:19:09.670 Attaching to 0000:00:10.0 00:19:09.670 Attached to 0000:00:10.0 00:19:09.670 0000:00:10.0: get features failed as expected 00:19:09.670 0000:00:10.0: get features successfully as expected 00:19:09.670 0000:00:10.0: read failed as expected 00:19:09.670 0000:00:10.0: read successfully as expected 00:19:09.670 Cleaning up... 00:19:09.670 00:19:09.670 real 0m0.549s 00:19:09.670 user 0m0.016s 00:19:09.670 sys 0m0.532s 00:19:09.670 10:23:15 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:09.670 10:23:15 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:19:09.670 ************************************ 00:19:09.670 END TEST nvme_err_injection 00:19:09.670 ************************************ 00:19:09.670 10:23:15 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:09.670 10:23:15 nvme -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:19:09.670 10:23:15 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:09.670 10:23:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:09.670 ************************************ 00:19:09.670 START TEST nvme_overhead 00:19:09.670 ************************************ 00:19:09.670 10:23:15 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:10.238 EAL: TSC is not safe to use in SMP mode 00:19:10.238 EAL: TSC is not invariant 00:19:10.238 [2024-06-10 10:23:15.588699] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:11.176 Initializing NVMe Controllers 00:19:11.176 Attaching to 0000:00:10.0 00:19:11.176 Attached to 0000:00:10.0 00:19:11.176 Initialization complete. Launching workers. 
00:19:11.176 submit (in ns) avg, min, max = 10173.0, 8249.5, 104212.4 00:19:11.176 complete (in ns) avg, min, max = 7368.4, 6381.0, 131745.7 00:19:11.176 00:19:11.176 Submit histogram 00:19:11.176 ================ 00:19:11.176 Range in us Cumulative Count 00:19:11.176 8.229 - 8.290: 0.0088% ( 1) 00:19:11.176 8.350 - 8.411: 0.0176% ( 1) 00:19:11.176 8.533 - 8.594: 0.0265% ( 1) 00:19:11.176 8.594 - 8.655: 0.0353% ( 1) 00:19:11.176 8.838 - 8.899: 0.0441% ( 1) 00:19:11.176 8.899 - 8.960: 0.1147% ( 8) 00:19:11.176 8.960 - 9.021: 0.2205% ( 12) 00:19:11.176 9.021 - 9.082: 0.4675% ( 28) 00:19:11.176 9.082 - 9.143: 1.2877% ( 93) 00:19:11.176 9.143 - 9.204: 2.8224% ( 174) 00:19:11.176 9.204 - 9.265: 5.4419% ( 297) 00:19:11.176 9.265 - 9.326: 8.0173% ( 292) 00:19:11.176 9.326 - 9.387: 10.2487% ( 253) 00:19:11.176 9.387 - 9.448: 11.6687% ( 161) 00:19:11.176 9.448 - 9.509: 12.6301% ( 109) 00:19:11.176 9.509 - 9.570: 13.7767% ( 130) 00:19:11.176 9.570 - 9.630: 16.0169% ( 254) 00:19:11.176 9.630 - 9.691: 21.0002% ( 565) 00:19:11.176 9.691 - 9.752: 30.1993% ( 1043) 00:19:11.176 9.752 - 9.813: 40.6244% ( 1182) 00:19:11.176 9.813 - 9.874: 50.5116% ( 1121) 00:19:11.176 9.874 - 9.935: 57.9203% ( 840) 00:19:11.176 9.935 - 9.996: 64.6058% ( 758) 00:19:11.176 9.996 - 10.057: 71.5558% ( 788) 00:19:11.176 10.057 - 10.118: 76.7596% ( 590) 00:19:11.176 10.118 - 10.179: 81.1607% ( 499) 00:19:11.176 10.179 - 10.240: 84.0536% ( 328) 00:19:11.176 10.240 - 10.301: 86.0734% ( 229) 00:19:11.176 10.301 - 10.362: 87.4846% ( 160) 00:19:11.176 10.362 - 10.423: 88.3048% ( 93) 00:19:11.176 10.423 - 10.484: 88.9751% ( 76) 00:19:11.177 10.484 - 10.545: 89.5925% ( 70) 00:19:11.177 10.545 - 10.606: 90.1835% ( 67) 00:19:11.177 10.606 - 10.667: 90.8979% ( 81) 00:19:11.177 10.667 - 10.728: 91.5153% ( 70) 00:19:11.177 10.728 - 10.789: 92.1415% ( 71) 00:19:11.177 10.789 - 10.850: 92.9000% ( 86) 00:19:11.177 10.850 - 10.910: 93.5086% ( 69) 00:19:11.177 10.910 - 10.971: 94.0289% ( 59) 00:19:11.177 10.971 - 11.032: 94.5140% ( 55) 00:19:11.177 11.032 - 11.093: 94.9197% ( 46) 00:19:11.177 11.093 - 11.154: 95.2549% ( 38) 00:19:11.177 11.154 - 11.215: 95.5989% ( 39) 00:19:11.177 11.215 - 11.276: 95.8635% ( 30) 00:19:11.177 11.276 - 11.337: 96.1192% ( 29) 00:19:11.177 11.337 - 11.398: 96.2427% ( 14) 00:19:11.177 11.398 - 11.459: 96.3838% ( 16) 00:19:11.177 11.459 - 11.520: 96.5691% ( 21) 00:19:11.177 11.520 - 11.581: 96.6573% ( 10) 00:19:11.177 11.581 - 11.642: 96.7543% ( 11) 00:19:11.177 11.642 - 11.703: 96.8072% ( 6) 00:19:11.177 11.703 - 11.764: 96.8689% ( 7) 00:19:11.177 11.764 - 11.825: 96.9130% ( 5) 00:19:11.177 11.825 - 11.886: 96.9660% ( 6) 00:19:11.177 11.886 - 11.947: 97.0101% ( 5) 00:19:11.177 11.947 - 12.008: 97.0542% ( 5) 00:19:11.177 12.008 - 12.069: 97.0894% ( 4) 00:19:11.177 12.069 - 12.130: 97.1512% ( 7) 00:19:11.177 12.130 - 12.190: 97.1776% ( 3) 00:19:11.177 12.190 - 12.251: 97.2129% ( 4) 00:19:11.177 12.251 - 12.312: 97.2570% ( 5) 00:19:11.177 12.312 - 12.373: 97.2747% ( 2) 00:19:11.177 12.373 - 12.434: 97.3011% ( 3) 00:19:11.177 12.434 - 12.495: 97.3099% ( 1) 00:19:11.177 12.495 - 12.556: 97.3364% ( 3) 00:19:11.177 12.556 - 12.617: 97.4158% ( 9) 00:19:11.177 12.617 - 12.678: 97.4599% ( 5) 00:19:11.177 12.678 - 12.739: 97.4775% ( 2) 00:19:11.177 12.739 - 12.800: 97.5304% ( 6) 00:19:11.177 12.800 - 12.861: 97.5745% ( 5) 00:19:11.177 12.922 - 12.983: 97.6010% ( 3) 00:19:11.177 12.983 - 13.044: 97.6363% ( 4) 00:19:11.177 13.044 - 13.105: 97.6804% ( 5) 00:19:11.177 13.166 - 13.227: 97.7156% ( 4) 00:19:11.177 13.227 - 13.288: 
97.7509% ( 4) 00:19:11.177 13.288 - 13.349: 97.7686% ( 2) 00:19:11.177 13.349 - 13.410: 97.7862% ( 2) 00:19:11.177 13.410 - 13.470: 97.8038% ( 2) 00:19:11.177 13.470 - 13.531: 97.8215% ( 2) 00:19:11.177 13.714 - 13.775: 97.8303% ( 1) 00:19:11.177 13.775 - 13.836: 97.8391% ( 1) 00:19:11.177 13.836 - 13.897: 97.8744% ( 4) 00:19:11.177 13.897 - 13.958: 97.9009% ( 3) 00:19:11.177 13.958 - 14.019: 97.9097% ( 1) 00:19:11.177 14.019 - 14.080: 97.9185% ( 1) 00:19:11.177 14.080 - 14.141: 97.9273% ( 1) 00:19:11.177 14.141 - 14.202: 97.9361% ( 1) 00:19:11.177 14.202 - 14.263: 97.9626% ( 3) 00:19:11.177 14.263 - 14.324: 97.9979% ( 4) 00:19:11.177 14.446 - 14.507: 98.0155% ( 2) 00:19:11.177 14.507 - 14.568: 98.0420% ( 3) 00:19:11.177 14.629 - 14.690: 98.0773% ( 4) 00:19:11.177 14.690 - 14.750: 98.0949% ( 2) 00:19:11.177 14.811 - 14.872: 98.1214% ( 3) 00:19:11.177 14.872 - 14.933: 98.1478% ( 3) 00:19:11.177 14.933 - 14.994: 98.1655% ( 2) 00:19:11.177 14.994 - 15.055: 98.2184% ( 6) 00:19:11.177 15.055 - 15.116: 98.2272% ( 1) 00:19:11.177 15.116 - 15.177: 98.2448% ( 2) 00:19:11.177 15.177 - 15.238: 98.2889% ( 5) 00:19:11.177 15.238 - 15.299: 98.3066% ( 2) 00:19:11.177 15.299 - 15.360: 98.3507% ( 5) 00:19:11.177 15.421 - 15.482: 98.3595% ( 1) 00:19:11.177 15.482 - 15.543: 98.3771% ( 2) 00:19:11.177 15.543 - 15.604: 98.3948% ( 2) 00:19:11.177 15.604 - 15.726: 98.4301% ( 4) 00:19:11.177 15.726 - 15.848: 98.4830% ( 6) 00:19:11.177 15.848 - 15.970: 98.5006% ( 2) 00:19:11.177 15.970 - 16.091: 98.5359% ( 4) 00:19:11.177 16.091 - 16.213: 98.5624% ( 3) 00:19:11.177 16.335 - 16.457: 98.5800% ( 2) 00:19:11.177 16.457 - 16.579: 98.6153% ( 4) 00:19:11.177 16.579 - 16.701: 98.6329% ( 2) 00:19:11.177 16.701 - 16.823: 98.6594% ( 3) 00:19:11.177 16.823 - 16.945: 98.6682% ( 1) 00:19:11.177 16.945 - 17.067: 98.6947% ( 3) 00:19:11.177 17.067 - 17.189: 98.7211% ( 3) 00:19:11.177 17.189 - 17.310: 98.7388% ( 2) 00:19:11.177 17.310 - 17.432: 98.7564% ( 2) 00:19:11.177 17.432 - 17.554: 98.7740% ( 2) 00:19:11.177 17.554 - 17.676: 98.8181% ( 5) 00:19:11.177 17.676 - 17.798: 98.8270% ( 1) 00:19:11.177 17.798 - 17.920: 98.8446% ( 2) 00:19:11.177 17.920 - 18.042: 98.8622% ( 2) 00:19:11.177 18.164 - 18.286: 98.8711% ( 1) 00:19:11.177 18.286 - 18.408: 98.8799% ( 1) 00:19:11.177 18.408 - 18.530: 98.8887% ( 1) 00:19:11.177 18.530 - 18.651: 98.8975% ( 1) 00:19:11.177 19.139 - 19.261: 98.9063% ( 1) 00:19:11.177 19.261 - 19.383: 98.9240% ( 2) 00:19:11.177 19.383 - 19.505: 98.9328% ( 1) 00:19:11.177 19.505 - 19.627: 98.9416% ( 1) 00:19:11.177 19.749 - 19.870: 98.9504% ( 1) 00:19:11.177 20.114 - 20.236: 98.9593% ( 1) 00:19:11.177 20.358 - 20.480: 98.9681% ( 1) 00:19:11.177 20.480 - 20.602: 99.0298% ( 7) 00:19:11.177 20.602 - 20.724: 99.0827% ( 6) 00:19:11.177 20.724 - 20.846: 99.1357% ( 6) 00:19:11.177 20.846 - 20.968: 99.1974% ( 7) 00:19:11.177 20.968 - 21.090: 99.2944% ( 11) 00:19:11.177 21.090 - 21.211: 99.3914% ( 11) 00:19:11.177 21.211 - 21.333: 99.4620% ( 8) 00:19:11.177 21.333 - 21.455: 99.5061% ( 5) 00:19:11.177 21.577 - 21.699: 99.5237% ( 2) 00:19:11.177 21.821 - 21.943: 99.5414% ( 2) 00:19:11.177 21.943 - 22.065: 99.5502% ( 1) 00:19:11.177 22.065 - 22.187: 99.5590% ( 1) 00:19:11.177 22.187 - 22.309: 99.5678% ( 1) 00:19:11.177 22.309 - 22.430: 99.5766% ( 1) 00:19:11.177 22.552 - 22.674: 99.5855% ( 1) 00:19:11.177 22.796 - 22.918: 99.5943% ( 1) 00:19:11.177 23.771 - 23.893: 99.6031% ( 1) 00:19:11.177 23.893 - 24.015: 99.6119% ( 1) 00:19:11.177 24.137 - 24.259: 99.6207% ( 1) 00:19:11.177 25.478 - 25.600: 99.6296% ( 1) 00:19:11.177 
25.722 - 25.844: 99.6384% ( 1) 00:19:11.177 25.844 - 25.966: 99.6560% ( 2) 00:19:11.177 25.966 - 26.088: 99.6648% ( 1) 00:19:11.177 26.088 - 26.210: 99.7089% ( 5) 00:19:11.177 26.210 - 26.331: 99.7266% ( 2) 00:19:11.177 26.331 - 26.453: 99.7530% ( 3) 00:19:11.177 26.453 - 26.575: 99.7883% ( 4) 00:19:11.177 26.575 - 26.697: 99.8060% ( 2) 00:19:11.177 26.697 - 26.819: 99.8148% ( 1) 00:19:11.177 26.819 - 26.941: 99.8412% ( 3) 00:19:11.177 26.941 - 27.063: 99.8501% ( 1) 00:19:11.177 27.063 - 27.185: 99.8677% ( 2) 00:19:11.177 27.550 - 27.672: 99.8853% ( 2) 00:19:11.177 27.794 - 27.916: 99.8942% ( 1) 00:19:11.177 28.282 - 28.404: 99.9030% ( 1) 00:19:11.177 29.623 - 29.745: 99.9118% ( 1) 00:19:11.177 31.451 - 31.695: 99.9206% ( 1) 00:19:11.177 31.695 - 31.939: 99.9294% ( 1) 00:19:11.177 34.865 - 35.109: 99.9383% ( 1) 00:19:11.177 36.571 - 36.815: 99.9471% ( 1) 00:19:11.177 64.366 - 64.853: 99.9559% ( 1) 00:19:11.177 76.556 - 77.044: 99.9647% ( 1) 00:19:11.177 79.482 - 79.970: 99.9735% ( 1) 00:19:11.177 80.945 - 81.432: 99.9824% ( 1) 00:19:11.177 89.722 - 90.210: 99.9912% ( 1) 00:19:11.177 103.863 - 104.350: 100.0000% ( 1) 00:19:11.177 00:19:11.177 Complete histogram 00:19:11.177 ================== 00:19:11.177 Range in us Cumulative Count 00:19:11.177 6.370 - 6.400: 0.0617% ( 7) 00:19:11.177 6.400 - 6.430: 0.3352% ( 31) 00:19:11.177 6.430 - 6.461: 1.1730% ( 95) 00:19:11.177 6.461 - 6.491: 2.3726% ( 136) 00:19:11.177 6.491 - 6.522: 3.6426% ( 144) 00:19:11.177 6.522 - 6.552: 4.9127% ( 144) 00:19:11.177 6.552 - 6.583: 6.1122% ( 136) 00:19:11.177 6.583 - 6.613: 7.0824% ( 110) 00:19:11.178 6.613 - 6.644: 7.7703% ( 78) 00:19:11.178 6.644 - 6.674: 8.2907% ( 59) 00:19:11.178 6.674 - 6.705: 8.7758% ( 55) 00:19:11.178 6.705 - 6.735: 9.1462% ( 42) 00:19:11.178 6.735 - 6.766: 9.4549% ( 35) 00:19:11.178 6.766 - 6.796: 9.8959% ( 50) 00:19:11.178 6.796 - 6.827: 10.9720% ( 122) 00:19:11.178 6.827 - 6.857: 13.8913% ( 331) 00:19:11.178 6.857 - 6.888: 20.4181% ( 740) 00:19:11.178 6.888 - 6.918: 28.1002% ( 871) 00:19:11.178 6.918 - 6.949: 35.2267% ( 808) 00:19:11.178 6.949 - 6.979: 40.6685% ( 617) 00:19:11.178 6.979 - 7.010: 45.4401% ( 541) 00:19:11.178 7.010 - 7.040: 49.0739% ( 412) 00:19:11.178 7.040 - 7.070: 53.1222% ( 459) 00:19:11.178 7.070 - 7.101: 57.5851% ( 506) 00:19:11.178 7.101 - 7.131: 62.7271% ( 583) 00:19:11.178 7.131 - 7.162: 66.9607% ( 480) 00:19:11.178 7.162 - 7.192: 70.1446% ( 361) 00:19:11.178 7.192 - 7.223: 72.9847% ( 322) 00:19:11.178 7.223 - 7.253: 75.3837% ( 272) 00:19:11.178 7.253 - 7.284: 77.4475% ( 234) 00:19:11.178 7.284 - 7.314: 79.4232% ( 224) 00:19:11.178 7.314 - 7.345: 80.6227% ( 136) 00:19:11.178 7.345 - 7.375: 81.8310% ( 137) 00:19:11.178 7.375 - 7.406: 83.0305% ( 136) 00:19:11.178 7.406 - 7.436: 84.0889% ( 120) 00:19:11.178 7.436 - 7.467: 85.0944% ( 114) 00:19:11.178 7.467 - 7.497: 86.1351% ( 118) 00:19:11.178 7.497 - 7.528: 87.1141% ( 111) 00:19:11.178 7.528 - 7.558: 87.9344% ( 93) 00:19:11.178 7.558 - 7.589: 88.7017% ( 87) 00:19:11.178 7.589 - 7.619: 89.4514% ( 85) 00:19:11.178 7.619 - 7.650: 90.2364% ( 89) 00:19:11.178 7.650 - 7.680: 91.0478% ( 92) 00:19:11.178 7.680 - 7.710: 91.5682% ( 59) 00:19:11.178 7.710 - 7.741: 92.0974% ( 60) 00:19:11.178 7.741 - 7.771: 92.5031% ( 46) 00:19:11.178 7.771 - 7.802: 92.8294% ( 37) 00:19:11.178 7.802 - 7.863: 93.5086% ( 77) 00:19:11.178 7.863 - 7.924: 93.9760% ( 53) 00:19:11.178 7.924 - 7.985: 94.5052% ( 60) 00:19:11.178 7.985 - 8.046: 94.9021% ( 45) 00:19:11.178 8.046 - 8.107: 95.1843% ( 32) 00:19:11.178 8.107 - 8.168: 95.4754% ( 33) 
00:19:11.178 8.168 - 8.229: 95.6342% ( 18) 00:19:11.178 8.229 - 8.290: 95.7841% ( 17) 00:19:11.178 8.290 - 8.350: 95.8811% ( 11) 00:19:11.178 8.350 - 8.411: 95.9693% ( 10) 00:19:11.178 8.411 - 8.472: 96.0663% ( 11) 00:19:11.178 8.472 - 8.533: 96.1104% ( 5) 00:19:11.178 8.533 - 8.594: 96.1810% ( 8) 00:19:11.178 8.594 - 8.655: 96.2780% ( 11) 00:19:11.178 8.655 - 8.716: 96.3750% ( 11) 00:19:11.178 8.716 - 8.777: 96.4368% ( 7) 00:19:11.178 8.777 - 8.838: 96.4720% ( 4) 00:19:11.178 8.838 - 8.899: 96.5514% ( 9) 00:19:11.178 8.899 - 8.960: 96.5779% ( 3) 00:19:11.178 8.960 - 9.021: 96.6308% ( 6) 00:19:11.178 9.021 - 9.082: 96.6925% ( 7) 00:19:11.178 9.082 - 9.143: 96.7190% ( 3) 00:19:11.178 9.143 - 9.204: 96.7631% ( 5) 00:19:11.178 9.204 - 9.265: 96.7896% ( 3) 00:19:11.178 9.265 - 9.326: 96.8160% ( 3) 00:19:11.178 9.326 - 9.387: 96.8689% ( 6) 00:19:11.178 9.387 - 9.448: 96.8866% ( 2) 00:19:11.178 9.448 - 9.509: 96.9483% ( 7) 00:19:11.178 9.509 - 9.570: 97.0101% ( 7) 00:19:11.178 9.570 - 9.630: 97.0630% ( 6) 00:19:11.178 9.630 - 9.691: 97.1247% ( 7) 00:19:11.178 9.691 - 9.752: 97.1600% ( 4) 00:19:11.178 9.752 - 9.813: 97.2041% ( 5) 00:19:11.178 9.813 - 9.874: 97.2217% ( 2) 00:19:11.178 9.874 - 9.935: 97.2306% ( 1) 00:19:11.178 9.935 - 9.996: 97.2658% ( 4) 00:19:11.178 9.996 - 10.057: 97.3364% ( 8) 00:19:11.178 10.057 - 10.118: 97.3629% ( 3) 00:19:11.178 10.118 - 10.179: 97.3805% ( 2) 00:19:11.178 10.179 - 10.240: 97.3893% ( 1) 00:19:11.178 10.240 - 10.301: 97.4246% ( 4) 00:19:11.178 10.301 - 10.362: 97.4334% ( 1) 00:19:11.178 10.362 - 10.423: 97.4510% ( 2) 00:19:11.178 10.423 - 10.484: 97.4599% ( 1) 00:19:11.178 10.484 - 10.545: 97.4951% ( 4) 00:19:11.178 10.545 - 10.606: 97.5040% ( 1) 00:19:11.178 10.667 - 10.728: 97.5128% ( 1) 00:19:11.178 10.789 - 10.850: 97.5304% ( 2) 00:19:11.178 10.850 - 10.910: 97.5392% ( 1) 00:19:11.178 10.910 - 10.971: 97.5481% ( 1) 00:19:11.178 10.971 - 11.032: 97.5657% ( 2) 00:19:11.178 11.032 - 11.093: 97.5833% ( 2) 00:19:11.178 11.154 - 11.215: 97.6098% ( 3) 00:19:11.178 11.215 - 11.276: 97.6274% ( 2) 00:19:11.178 11.276 - 11.337: 97.6363% ( 1) 00:19:11.178 11.337 - 11.398: 97.6539% ( 2) 00:19:11.178 11.459 - 11.520: 97.6627% ( 1) 00:19:11.178 11.520 - 11.581: 97.6715% ( 1) 00:19:11.178 11.581 - 11.642: 97.6892% ( 2) 00:19:11.178 11.642 - 11.703: 97.7068% ( 2) 00:19:11.178 11.764 - 11.825: 97.7156% ( 1) 00:19:11.178 11.825 - 11.886: 97.7333% ( 2) 00:19:11.178 11.886 - 11.947: 97.7421% ( 1) 00:19:11.178 11.947 - 12.008: 97.7597% ( 2) 00:19:11.178 12.008 - 12.069: 97.7686% ( 1) 00:19:11.178 12.069 - 12.130: 97.7950% ( 3) 00:19:11.178 12.130 - 12.190: 97.8127% ( 2) 00:19:11.178 12.190 - 12.251: 97.8391% ( 3) 00:19:11.178 12.251 - 12.312: 97.8568% ( 2) 00:19:11.178 12.373 - 12.434: 97.8656% ( 1) 00:19:11.178 12.434 - 12.495: 97.8744% ( 1) 00:19:11.178 12.495 - 12.556: 97.8920% ( 2) 00:19:11.178 12.556 - 12.617: 97.9097% ( 2) 00:19:11.178 12.617 - 12.678: 97.9185% ( 1) 00:19:11.178 12.678 - 12.739: 97.9273% ( 1) 00:19:11.178 12.739 - 12.800: 97.9626% ( 4) 00:19:11.178 12.800 - 12.861: 97.9714% ( 1) 00:19:11.178 12.861 - 12.922: 97.9802% ( 1) 00:19:11.178 12.922 - 12.983: 98.0067% ( 3) 00:19:11.178 12.983 - 13.044: 98.0155% ( 1) 00:19:11.178 13.044 - 13.105: 98.0243% ( 1) 00:19:11.178 13.105 - 13.166: 98.0332% ( 1) 00:19:11.178 13.166 - 13.227: 98.0773% ( 5) 00:19:11.178 13.227 - 13.288: 98.1037% ( 3) 00:19:11.178 13.288 - 13.349: 98.1390% ( 4) 00:19:11.178 13.410 - 13.470: 98.1566% ( 2) 00:19:11.178 13.470 - 13.531: 98.1655% ( 1) 00:19:11.178 13.531 - 13.592: 98.2007% ( 4) 
00:19:11.178 13.653 - 13.714: 98.2272% ( 3) 00:19:11.178 13.714 - 13.775: 98.2360% ( 1) 00:19:11.178 13.775 - 13.836: 98.2448% ( 1) 00:19:11.178 13.836 - 13.897: 98.2537% ( 1) 00:19:11.178 13.897 - 13.958: 98.2801% ( 3) 00:19:11.178 13.958 - 14.019: 98.2978% ( 2) 00:19:11.178 14.019 - 14.080: 98.3066% ( 1) 00:19:11.178 14.080 - 14.141: 98.3242% ( 2) 00:19:11.178 14.141 - 14.202: 98.3330% ( 1) 00:19:11.178 14.263 - 14.324: 98.3419% ( 1) 00:19:11.178 14.385 - 14.446: 98.3683% ( 3) 00:19:11.178 14.446 - 14.507: 98.3771% ( 1) 00:19:11.178 14.507 - 14.568: 98.4036% ( 3) 00:19:11.178 14.568 - 14.629: 98.4124% ( 1) 00:19:11.178 14.629 - 14.690: 98.4212% ( 1) 00:19:11.178 14.750 - 14.811: 98.4477% ( 3) 00:19:11.178 14.811 - 14.872: 98.4653% ( 2) 00:19:11.178 15.299 - 15.360: 98.4830% ( 2) 00:19:11.178 15.360 - 15.421: 98.5094% ( 3) 00:19:11.178 15.482 - 15.543: 98.5359% ( 3) 00:19:11.178 15.848 - 15.970: 98.5535% ( 2) 00:19:11.178 15.970 - 16.091: 98.5624% ( 1) 00:19:11.178 16.213 - 16.335: 98.5800% ( 2) 00:19:11.178 16.335 - 16.457: 98.5976% ( 2) 00:19:11.178 16.457 - 16.579: 98.6065% ( 1) 00:19:11.178 16.823 - 16.945: 98.6241% ( 2) 00:19:11.178 17.067 - 17.189: 98.6594% ( 4) 00:19:11.178 17.554 - 17.676: 98.6682% ( 1) 00:19:11.178 17.676 - 17.798: 98.7652% ( 11) 00:19:11.178 17.798 - 17.920: 98.8711% ( 12) 00:19:11.178 17.920 - 18.042: 98.9945% ( 14) 00:19:11.178 18.042 - 18.164: 99.1180% ( 14) 00:19:11.178 18.164 - 18.286: 99.2150% ( 11) 00:19:11.178 18.286 - 18.408: 99.3650% ( 17) 00:19:11.178 18.408 - 18.530: 99.4091% ( 5) 00:19:11.178 18.530 - 18.651: 99.4796% ( 8) 00:19:11.178 18.651 - 18.773: 99.4884% ( 1) 00:19:11.178 18.773 - 18.895: 99.5237% ( 4) 00:19:11.178 18.895 - 19.017: 99.5325% ( 1) 00:19:11.178 19.383 - 19.505: 99.5678% ( 4) 00:19:11.178 19.749 - 19.870: 99.5766% ( 1) 00:19:11.178 19.992 - 20.114: 99.5855% ( 1) 00:19:11.178 20.358 - 20.480: 99.5943% ( 1) 00:19:11.178 20.846 - 20.968: 99.6031% ( 1) 00:19:11.178 20.968 - 21.090: 99.6119% ( 1) 00:19:11.178 21.943 - 22.065: 99.6207% ( 1) 00:19:11.178 22.552 - 22.674: 99.6296% ( 1) 00:19:11.178 22.796 - 22.918: 99.6384% ( 1) 00:19:11.178 22.918 - 23.040: 99.6560% ( 2) 00:19:11.178 23.040 - 23.162: 99.6737% ( 2) 00:19:11.178 23.162 - 23.284: 99.6825% ( 1) 00:19:11.178 23.284 - 23.406: 99.7178% ( 4) 00:19:11.178 23.406 - 23.528: 99.7619% ( 5) 00:19:11.178 23.528 - 23.650: 99.8324% ( 8) 00:19:11.178 23.650 - 23.771: 99.8589% ( 3) 00:19:11.178 23.771 - 23.893: 99.9206% ( 7) 00:19:11.178 23.893 - 24.015: 99.9383% ( 2) 00:19:11.178 24.259 - 24.381: 99.9471% ( 1) 00:19:11.179 25.844 - 25.966: 99.9559% ( 1) 00:19:11.179 26.941 - 27.063: 99.9647% ( 1) 00:19:11.179 28.282 - 28.404: 99.9735% ( 1) 00:19:11.179 30.720 - 30.842: 99.9824% ( 1) 00:19:11.179 38.522 - 38.766: 99.9912% ( 1) 00:19:11.179 131.657 - 132.632: 100.0000% ( 1) 00:19:11.179 00:19:11.179 00:19:11.179 real 0m1.539s 00:19:11.179 user 0m1.013s 00:19:11.179 sys 0m0.525s 00:19:11.179 10:23:16 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:11.179 ************************************ 00:19:11.179 10:23:16 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:19:11.179 END TEST nvme_overhead 00:19:11.179 ************************************ 00:19:11.179 10:23:16 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:11.179 10:23:16 nvme -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:19:11.179 10:23:16 nvme -- common/autotest_common.sh@1106 -- # 
xtrace_disable 00:19:11.179 10:23:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:11.179 ************************************ 00:19:11.179 START TEST nvme_arbitration 00:19:11.179 ************************************ 00:19:11.179 10:23:16 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:12.200 EAL: TSC is not safe to use in SMP mode 00:19:12.200 EAL: TSC is not invariant 00:19:12.200 [2024-06-10 10:23:17.433543] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:15.520 Initializing NVMe Controllers 00:19:15.520 Attaching to 0000:00:10.0 00:19:15.520 Attached to 0000:00:10.0 00:19:15.520 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:19:15.520 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:19:15.520 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:19:15.520 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:19:15.520 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:19:15.520 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:19:15.520 Initialization complete. Launching workers. 00:19:15.520 Starting thread on core 1 with urgent priority queue 00:19:15.520 Starting thread on core 2 with urgent priority queue 00:19:15.520 Starting thread on core 3 with urgent priority queue 00:19:15.520 Starting thread on core 0 with urgent priority queue 00:19:15.520 QEMU NVMe Ctrl (12340 ) core 0: 5891.00 IO/s 16.98 secs/100000 ios 00:19:15.520 QEMU NVMe Ctrl (12340 ) core 1: 5829.67 IO/s 17.15 secs/100000 ios 00:19:15.520 QEMU NVMe Ctrl (12340 ) core 2: 5907.33 IO/s 16.93 secs/100000 ios 00:19:15.520 QEMU NVMe Ctrl (12340 ) core 3: 5884.33 IO/s 16.99 secs/100000 ios 00:19:15.520 ======================================================== 00:19:15.520 00:19:15.520 00:19:15.520 real 0m4.424s 00:19:15.520 user 0m12.641s 00:19:15.520 sys 0m0.819s 00:19:15.520 10:23:21 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:15.520 10:23:21 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:19:15.520 ************************************ 00:19:15.520 END TEST nvme_arbitration 00:19:15.520 ************************************ 00:19:15.779 10:23:21 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:15.779 10:23:21 nvme -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:19:15.779 10:23:21 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:15.779 10:23:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:15.779 ************************************ 00:19:15.779 START TEST nvme_single_aen 00:19:15.779 ************************************ 00:19:15.779 10:23:21 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:16.037 EAL: TSC is not safe to use in SMP mode 00:19:16.037 EAL: TSC is not invariant 00:19:16.037 [2024-06-10 10:23:21.596614] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:16.322 Asynchronous Event Request test 00:19:16.322 Attaching to 0000:00:10.0 00:19:16.322 Attached to 0000:00:10.0 00:19:16.322 Reset controller to setup AER completions for this process 00:19:16.322 Registering asynchronous event callbacks... 
00:19:16.322 Getting orig temperature thresholds of all controllers 00:19:16.322 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:16.322 Setting all controllers temperature threshold low to trigger AER 00:19:16.322 Waiting for all controllers temperature threshold to be set lower 00:19:16.322 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:16.322 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:16.322 Waiting for all controllers to trigger AER and reset threshold 00:19:16.322 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:16.322 Cleaning up... 00:19:16.322 00:19:16.322 real 0m0.513s 00:19:16.322 user 0m0.010s 00:19:16.322 sys 0m0.503s 00:19:16.322 10:23:21 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:16.322 10:23:21 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:19:16.322 ************************************ 00:19:16.322 END TEST nvme_single_aen 00:19:16.322 ************************************ 00:19:16.322 10:23:21 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:19:16.322 10:23:21 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:16.322 10:23:21 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:16.322 10:23:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.322 ************************************ 00:19:16.322 START TEST nvme_doorbell_aers 00:19:16.322 ************************************ 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # nvme_doorbell_aers 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # bdfs=() 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1512 -- # local bdfs 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:16.322 10:23:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /usr/home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:16.890 EAL: TSC is not safe to use in SMP mode 00:19:16.890 EAL: TSC is not invariant 00:19:16.890 [2024-06-10 10:23:22.214666] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:16.890 Executing: test_write_invalid_db 00:19:16.890 Waiting for AER completion... 00:19:16.890 Asynchronous Event received. 00:19:16.890 Error Informaton Log Page received. 
00:19:16.890 Success: test_write_invalid_db 00:19:16.890 00:19:16.890 Executing: test_invalid_db_write_overflow_sq 00:19:16.890 Waiting for AER completion... 00:19:16.890 Asynchronous Event received. 00:19:16.890 Error Informaton Log Page received. 00:19:16.890 Success: test_invalid_db_write_overflow_sq 00:19:16.890 00:19:16.890 Executing: test_invalid_db_write_overflow_cq 00:19:16.890 Waiting for AER completion... 00:19:16.890 Asynchronous Event received. 00:19:16.890 Error Informaton Log Page received. 00:19:16.890 Success: test_invalid_db_write_overflow_cq 00:19:16.890 00:19:16.890 00:19:16.890 real 0m0.567s 00:19:16.890 user 0m0.042s 00:19:16.890 sys 0m0.537s 00:19:16.890 10:23:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:16.890 10:23:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:19:16.890 ************************************ 00:19:16.890 END TEST nvme_doorbell_aers 00:19:16.890 ************************************ 00:19:16.890 10:23:22 nvme -- nvme/nvme.sh@97 -- # uname 00:19:16.890 10:23:22 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:19:16.890 10:23:22 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:19:16.890 10:23:22 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:16.890 10:23:22 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:16.890 10:23:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.890 ************************************ 00:19:16.890 START TEST bdev_nvme_reset_stuck_adm_cmd 00:19:16.890 ************************************ 00:19:16.890 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:19:17.149 * Looking for test storage... 
00:19:17.149 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # bdfs=() 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # local bdfs 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # bdfs=() 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # local bdfs 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1526 -- # echo 0000:00:10.0 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69802 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69802 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@830 -- # '[' -z 69802 ']' 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.149 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:17.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
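The waitforlisten step above blocks until the newly started spdk_tgt answers on its RPC socket. A minimal stand-in for that helper is sketched below; it assumes the stock rpc.py client and the default /var/tmp/spdk.sock path, and the real helper in autotest_common.sh is more involved than this.

# Minimal waitforlisten-style sketch (not the actual autotest_common.sh implementation).
wait_for_rpc() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} deadline=$((SECONDS + 30))
  while (( SECONDS < deadline )); do
    # Give up early if the target process exited instead of starting up.
    kill -0 "$pid" 2>/dev/null || return 1
    # rpc_get_methods succeeds as soon as the RPC listener is ready.
    if /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
      return 0
    fi
    sleep 0.5
  done
  return 1
}
# Usage: wait_for_rpc "$spdk_target_pid" /var/tmp/spdk.sock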
00:19:17.150 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.150 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:17.150 10:23:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:17.150 [2024-06-10 10:23:22.588416] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:19:17.150 [2024-06-10 10:23:22.588590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:17.717 EAL: TSC is not safe to use in SMP mode 00:19:17.717 EAL: TSC is not invariant 00:19:17.717 [2024-06-10 10:23:23.071857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:17.717 [2024-06-10 10:23:23.165005] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:17.717 [2024-06-10 10:23:23.165074] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:17.717 [2024-06-10 10:23:23.165087] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:19:17.717 [2024-06-10 10:23:23.165097] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:19:17.717 [2024-06-10 10:23:23.170141] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.717 [2024-06-10 10:23:23.169938] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.717 [2024-06-10 10:23:23.170056] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.717 [2024-06-10 10:23:23.170134] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:19:17.976 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:17.976 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@863 -- # return 0 00:19:17.976 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:19:17.976 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.976 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:18.275 [2024-06-10 10:23:23.587741] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:18.275 nvme0n1 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:18.275 true 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.275 10:23:23 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1718015003 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69814 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:18.275 10:23:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:20.805 [2024-06-10 10:23:25.798570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:19:20.805 [2024-06-10 10:23:25.798724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:20.805 [2024-06-10 10:23:25.798738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:20.805 [2024-06-10 10:23:25.798747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:20.805 [2024-06-10 10:23:25.799886] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.805 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69814 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69814 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69814 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.yurzMb 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.fosrmS 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69802 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@949 -- # '[' -z 69802 ']' 00:19:20.805 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # kill -0 69802 00:19:20.806 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # uname 00:19:20.806 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:19:20.806 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # ps -c -o command 69802 00:19:20.806 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # tail -1 00:19:20.806 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:19:20.806 killing process with pid 69802 00:19:20.806 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:19:20.806 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # echo 'killing process with pid 69802' 00:19:20.806 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # kill 69802 00:19:20.806 10:23:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # wait 69802 00:19:20.806 10:23:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:19:20.806 10:23:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:19:20.806 00:19:20.806 real 0m3.814s 00:19:20.806 user 0m12.437s 00:19:20.806 sys 0m0.825s 00:19:20.806 10:23:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:20.806 10:23:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:20.806 ************************************ 00:19:20.806 END TEST bdev_nvme_reset_stuck_adm_cmd 00:19:20.806 ************************************ 00:19:20.806 10:23:26 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:19:20.806 10:23:26 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:19:20.806 10:23:26 nvme -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:20.806 10:23:26 nvme -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:20.806 10:23:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.806 ************************************ 00:19:20.806 START TEST nvme_fio 00:19:20.806 ************************************ 00:19:20.806 10:23:26 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # nvme_fio_test 00:19:20.806 10:23:26 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/usr/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:20.806 10:23:26 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:19:20.806 10:23:26 
nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:19:20.806 10:23:26 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # bdfs=() 00:19:20.806 10:23:26 nvme.nvme_fio -- common/autotest_common.sh@1512 -- # local bdfs 00:19:20.806 10:23:26 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:20.806 10:23:26 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:20.806 10:23:26 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:19:20.806 10:23:26 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:19:20.806 10:23:26 nvme.nvme_fio -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:19:20.806 10:23:26 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:19:20.806 10:23:26 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:19:20.806 10:23:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:19:20.806 10:23:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:20.806 10:23:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:19:21.373 EAL: TSC is not safe to use in SMP mode 00:19:21.373 EAL: TSC is not invariant 00:19:21.373 [2024-06-10 10:23:26.684918] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:21.373 10:23:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:21.373 10:23:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:19:21.632 EAL: TSC is not safe to use in SMP mode 00:19:21.632 EAL: TSC is not invariant 00:19:21.632 [2024-06-10 10:23:27.211453] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:21.919 10:23:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:19:21.919 10:23:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1359 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1338 -- # local sanitizers 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # shift 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local asan_lib= 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # grep libasan 
00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # asan_lib= 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # asan_lib= 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:21.919 10:23:27 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:21.919 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:21.919 fio-3.35 00:19:21.919 Starting 1 thread 00:19:22.484 EAL: TSC is not safe to use in SMP mode 00:19:22.484 EAL: TSC is not invariant 00:19:22.484 [2024-06-10 10:23:27.842056] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:25.021 00:19:25.021 test: (groupid=0, jobs=1): err= 0: pid=102878: Mon Jun 10 10:23:30 2024 00:19:25.021 read: IOPS=44.2k, BW=173MiB/s (181MB/s)(345MiB/2001msec) 00:19:25.021 slat (nsec): min=455, max=27792, avg=701.62, stdev=405.33 00:19:25.021 clat (usec): min=274, max=6711, avg=1449.72, stdev=396.06 00:19:25.021 lat (usec): min=275, max=6711, avg=1450.42, stdev=396.11 00:19:25.021 clat percentiles (usec): 00:19:25.021 | 1.00th=[ 725], 5.00th=[ 1037], 10.00th=[ 1106], 20.00th=[ 1188], 00:19:25.021 | 30.00th=[ 1254], 40.00th=[ 1336], 50.00th=[ 1385], 60.00th=[ 1450], 00:19:25.021 | 70.00th=[ 1516], 80.00th=[ 1631], 90.00th=[ 1827], 95.00th=[ 2089], 00:19:25.021 | 99.00th=[ 2966], 99.50th=[ 3392], 99.90th=[ 4621], 99.95th=[ 5145], 00:19:25.021 | 99.99th=[ 6063] 00:19:25.021 bw ( KiB/s): min=160291, max=196315, per=100.00%, avg=178546.67, stdev=18016.94, samples=3 00:19:25.021 iops : min=40072, max=49078, avg=44636.00, stdev=4504.24, samples=3 00:19:25.021 write: IOPS=44.1k, BW=172MiB/s (180MB/s)(344MiB/2001msec); 0 zone resets 00:19:25.021 slat (nsec): min=501, max=29110, avg=988.62, stdev=509.90 00:19:25.021 clat (usec): min=268, max=6610, avg=1447.54, stdev=395.50 00:19:25.021 lat (usec): min=270, max=6611, avg=1448.53, stdev=395.55 00:19:25.021 clat percentiles (usec): 00:19:25.021 | 1.00th=[ 725], 5.00th=[ 1037], 10.00th=[ 1106], 20.00th=[ 1188], 00:19:25.021 | 30.00th=[ 1254], 40.00th=[ 1319], 50.00th=[ 1385], 60.00th=[ 1450], 00:19:25.021 | 70.00th=[ 1516], 80.00th=[ 1631], 90.00th=[ 1811], 95.00th=[ 2073], 00:19:25.021 | 99.00th=[ 2966], 99.50th=[ 3425], 99.90th=[ 4621], 99.95th=[ 4948], 00:19:25.021 | 99.99th=[ 6063] 00:19:25.021 bw ( KiB/s): min=160583, max=193942, per=100.00%, avg=177575.00, stdev=16688.28, samples=3 00:19:25.021 iops : min=40145, max=48485, avg=44393.33, stdev=4172.21, samples=3 00:19:25.021 lat (usec) : 500=0.13%, 750=1.03%, 1000=2.38% 00:19:25.021 lat (msec) : 2=90.53%, 4=5.71%, 10=0.23% 00:19:25.021 cpu : usr=100.00%, sys=0.00%, ctx=24, majf=0, minf=2 
00:19:25.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:25.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.021 issued rwts: total=88396,88147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.021 00:19:25.021 Run status group 0 (all jobs): 00:19:25.021 READ: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=345MiB (362MB), run=2001-2001msec 00:19:25.021 WRITE: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=344MiB (361MB), run=2001-2001msec 00:19:25.956 10:23:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:19:25.956 10:23:31 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:19:25.956 00:19:25.956 real 0m5.277s 00:19:25.956 user 0m2.784s 00:19:25.956 sys 0m2.410s 00:19:25.956 10:23:31 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:25.956 10:23:31 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:19:25.956 ************************************ 00:19:25.956 END TEST nvme_fio 00:19:25.956 ************************************ 00:19:25.956 00:19:25.956 real 0m25.660s 00:19:25.956 user 0m31.930s 00:19:25.956 sys 0m12.124s 00:19:25.956 10:23:31 nvme -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:25.956 10:23:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:25.956 ************************************ 00:19:25.956 END TEST nvme 00:19:25.956 ************************************ 00:19:25.956 10:23:31 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:19:25.956 10:23:31 -- spdk/autotest.sh@221 -- # run_test nvme_scc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:19:25.956 10:23:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:25.956 10:23:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:25.956 10:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:25.956 ************************************ 00:19:25.956 START TEST nvme_scc 00:19:25.956 ************************************ 00:19:25.956 10:23:31 nvme_scc -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:19:26.215 * Looking for test storage... 
00:19:26.215 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:19:26.215 10:23:31 nvme_scc -- cuse/common.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@7 -- # dirname /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/usr/home/vagrant/spdk_repo/spdk 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:26.215 10:23:31 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.215 10:23:31 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.215 10:23:31 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.215 10:23:31 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:26.215 10:23:31 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:26.215 10:23:31 nvme_scc -- paths/export.sh@4 -- # export PATH 00:19:26.215 10:23:31 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:19:26.215 10:23:31 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:19:26.215 10:23:31 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.215 10:23:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:19:26.215 10:23:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:19:26.215 10:23:31 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:19:26.215 00:19:26.215 real 0m0.166s 00:19:26.215 user 0m0.129s 00:19:26.215 sys 0m0.111s 00:19:26.215 10:23:31 nvme_scc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:26.215 10:23:31 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:19:26.215 ************************************ 00:19:26.215 END TEST nvme_scc 00:19:26.215 ************************************ 00:19:26.215 10:23:31 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:19:26.215 10:23:31 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:19:26.215 10:23:31 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:19:26.215 10:23:31 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:19:26.215 10:23:31 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 
00:19:26.215 10:23:31 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:26.215 10:23:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:26.215 10:23:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:26.215 10:23:31 -- common/autotest_common.sh@10 -- # set +x 00:19:26.215 ************************************ 00:19:26.215 START TEST nvme_rpc 00:19:26.215 ************************************ 00:19:26.215 10:23:31 nvme_rpc -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:26.476 * Looking for test storage... 00:19:26.476 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:19:26.476 10:23:31 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:26.476 10:23:31 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1523 -- # bdfs=() 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1523 -- # local bdfs 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1512 -- # bdfs=() 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1512 -- # local bdfs 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1513 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:00:10.0 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@1526 -- # echo 0000:00:10.0 00:19:26.476 10:23:31 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:19:26.476 10:23:31 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=70056 00:19:26.476 10:23:31 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:19:26.476 10:23:31 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 70056 00:19:26.476 10:23:31 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@830 -- # '[' -z 70056 ']' 00:19:26.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:26.476 10:23:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:26.476 [2024-06-10 10:23:31.947687] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 
00:19:26.476 [2024-06-10 10:23:31.947886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:27.045 EAL: TSC is not safe to use in SMP mode 00:19:27.045 EAL: TSC is not invariant 00:19:27.045 [2024-06-10 10:23:32.440190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:27.045 [2024-06-10 10:23:32.533745] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:27.045 [2024-06-10 10:23:32.533799] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:27.045 [2024-06-10 10:23:32.537155] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.045 [2024-06-10 10:23:32.537146] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.622 10:23:32 nvme_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:27.622 10:23:32 nvme_rpc -- common/autotest_common.sh@863 -- # return 0 00:19:27.622 10:23:32 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:19:27.622 [2024-06-10 10:23:33.222273] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:27.882 Nvme0n1 00:19:27.882 10:23:33 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:19:27.882 10:23:33 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:19:28.141 request: 00:19:28.141 { 00:19:28.141 "filename": "non_existing_file", 00:19:28.141 "bdev_name": "Nvme0n1", 00:19:28.141 "method": "bdev_nvme_apply_firmware", 00:19:28.141 "req_id": 1 00:19:28.141 } 00:19:28.141 Got JSON-RPC error response 00:19:28.141 response: 00:19:28.141 { 00:19:28.141 "code": -32603, 00:19:28.141 "message": "open file failed." 
00:19:28.141 } 00:19:28.141 10:23:33 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:19:28.141 10:23:33 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:19:28.141 10:23:33 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:28.399 10:23:33 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:28.399 10:23:33 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 70056 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@949 -- # '[' -z 70056 ']' 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@953 -- # kill -0 70056 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@954 -- # uname 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@957 -- # ps -c -o command 70056 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@957 -- # tail -1 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 70056' 00:19:28.399 killing process with pid 70056 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@968 -- # kill 70056 00:19:28.399 10:23:33 nvme_rpc -- common/autotest_common.sh@973 -- # wait 70056 00:19:28.657 00:19:28.657 real 0m2.336s 00:19:28.657 user 0m4.301s 00:19:28.657 sys 0m0.804s 00:19:28.657 10:23:34 nvme_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:28.657 ************************************ 00:19:28.657 END TEST nvme_rpc 00:19:28.657 ************************************ 00:19:28.657 10:23:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:28.657 10:23:34 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:28.657 10:23:34 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:19:28.657 10:23:34 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:19:28.657 10:23:34 -- common/autotest_common.sh@10 -- # set +x 00:19:28.657 ************************************ 00:19:28.657 START TEST nvme_rpc_timeouts 00:19:28.657 ************************************ 00:19:28.657 10:23:34 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:28.914 * Looking for test storage... 
00:19:28.914 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:19:28.914 10:23:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:28.914 10:23:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_70093 00:19:28.914 10:23:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_70093 00:19:28.914 10:23:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=70121 00:19:28.914 10:23:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:19:28.914 10:23:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:28.914 10:23:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 70121 00:19:28.914 10:23:34 nvme_rpc_timeouts -- common/autotest_common.sh@830 -- # '[' -z 70121 ']' 00:19:28.914 10:23:34 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.915 10:23:34 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local max_retries=100 00:19:28.915 10:23:34 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.915 10:23:34 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # xtrace_disable 00:19:28.915 10:23:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:28.915 [2024-06-10 10:23:34.281355] Starting SPDK v24.09-pre git sha1 c5e2a446d / DPDK 24.03.0 initialization... 00:19:28.915 [2024-06-10 10:23:34.281532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:19:29.172 EAL: TSC is not safe to use in SMP mode 00:19:29.172 EAL: TSC is not invariant 00:19:29.172 [2024-06-10 10:23:34.735942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:29.431 [2024-06-10 10:23:34.829439] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:29.431 [2024-06-10 10:23:34.829507] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:19:29.431 [2024-06-10 10:23:34.832987] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.431 [2024-06-10 10:23:34.832978] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.029 10:23:35 nvme_rpc_timeouts -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:19:30.030 10:23:35 nvme_rpc_timeouts -- common/autotest_common.sh@863 -- # return 0 00:19:30.030 10:23:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:19:30.030 Checking default timeout settings: 00:19:30.030 10:23:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:30.289 Making settings changes with rpc: 00:19:30.289 10:23:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:19:30.289 10:23:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:19:30.289 Check default vs. modified settings: 00:19:30.289 10:23:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:19:30.289 10:23:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_70093 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_70093 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:19:30.855 Setting action_on_timeout is changed as expected. 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_70093 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_70093 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:19:30.855 Setting timeout_us is changed as expected. 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_70093 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_70093 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:19:30.855 Setting timeout_admin_us is changed as expected. 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_70093 /tmp/settings_modified_70093 00:19:30.855 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 70121 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@949 -- # '[' -z 70121 ']' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # kill -0 70121 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # uname 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' FreeBSD = Linux ']' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # ps -c -o command 70121 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # tail -1 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # process_name=spdk_tgt 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' spdk_tgt = sudo ']' 00:19:30.855 killing process with pid 70121 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # echo 'killing process with pid 70121' 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # kill 70121 00:19:30.855 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # wait 70121 00:19:31.113 RPC TIMEOUT SETTING TEST PASSED. 00:19:31.113 10:23:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:19:31.113 00:19:31.113 real 0m2.377s 00:19:31.113 user 0m4.562s 00:19:31.113 sys 0m0.697s 00:19:31.113 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # xtrace_disable 00:19:31.113 10:23:36 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:31.113 ************************************ 00:19:31.113 END TEST nvme_rpc_timeouts 00:19:31.113 ************************************ 00:19:31.113 10:23:36 -- spdk/autotest.sh@243 -- # uname -s 00:19:31.113 10:23:36 -- spdk/autotest.sh@243 -- # '[' FreeBSD = Linux ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:19:31.113 10:23:36 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:31.113 10:23:36 -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:31.113 10:23:36 -- common/autotest_common.sh@10 -- # set +x 00:19:31.113 10:23:36 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:19:31.113 10:23:36 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:19:31.113 10:23:36 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 
00:19:31.113 10:23:36 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:19:31.113 10:23:36 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:19:31.113 10:23:36 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:19:31.113 10:23:36 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:19:31.113 10:23:36 -- common/autotest_common.sh@723 -- # xtrace_disable 00:19:31.113 10:23:36 -- common/autotest_common.sh@10 -- # set +x 00:19:31.113 10:23:36 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:19:31.113 10:23:36 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:19:31.113 10:23:36 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:19:31.113 10:23:36 -- common/autotest_common.sh@10 -- # set +x 00:19:31.678 setup.sh cleanup function not yet supported on FreeBSD 00:19:31.678 10:23:37 -- common/autotest_common.sh@1450 -- # return 0 00:19:31.678 10:23:37 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:19:31.678 10:23:37 -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:31.678 10:23:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.678 10:23:37 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:19:31.678 10:23:37 -- common/autotest_common.sh@729 -- # xtrace_disable 00:19:31.678 10:23:37 -- common/autotest_common.sh@10 -- # set +x 00:19:31.678 10:23:37 -- spdk/autotest.sh@387 -- # chmod a+r /usr/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:31.678 10:23:37 -- spdk/autotest.sh@389 -- # [[ -f /usr/home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:31.678 10:23:37 -- spdk/autotest.sh@391 -- # hash lcov 00:19:31.678 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 391: hash: lcov: not found 00:19:31.959 10:23:37 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:31.959 10:23:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:31.959 10:23:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.959 10:23:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.959 10:23:37 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:31.959 10:23:37 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:31.959 10:23:37 -- paths/export.sh@4 -- $ export PATH 00:19:31.959 10:23:37 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:31.959 10:23:37 -- common/autobuild_common.sh@436 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:19:31.959 10:23:37 -- common/autobuild_common.sh@437 -- $ date +%s 00:19:31.959 10:23:37 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718015017.XXXXXX 00:19:31.959 10:23:37 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718015017.XXXXXX.3UukzV1B 00:19:31.959 10:23:37 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:19:31.959 10:23:37 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:19:31.959 10:23:37 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:19:31.959 10:23:37 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp' 00:19:31.959 10:23:37 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:31.959 10:23:37 -- common/autobuild_common.sh@453 -- $ get_config_params 00:19:31.959 10:23:37 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:19:31.959 10:23:37 -- common/autotest_common.sh@10 -- $ set +x 00:19:31.959 10:23:37 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:19:31.959 10:23:37 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:19:31.959 10:23:37 -- pm/common@17 -- $ local monitor 00:19:31.959 10:23:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:31.959 10:23:37 -- pm/common@25 -- $ sleep 1 00:19:31.959 10:23:37 -- pm/common@21 -- $ date +%s 00:19:31.959 10:23:37 -- pm/common@21 -- $ /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /usr/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1718015017 00:19:32.217 Redirecting to /usr/home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1718015017_collect-vmstat.pm.log 00:19:33.150 10:23:38 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:19:33.150 10:23:38 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:19:33.150 10:23:38 -- spdk/autopackage.sh@11 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:19:33.150 10:23:38 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:19:33.150 10:23:38 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:19:33.150 10:23:38 -- spdk/autopackage.sh@19 -- $ timing_finish 00:19:33.150 10:23:38 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:33.150 10:23:38 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:19:33.150 10:23:38 -- spdk/autopackage.sh@20 -- $ exit 0 00:19:33.150 10:23:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:19:33.150 10:23:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:19:33.150 10:23:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:19:33.150 10:23:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:33.150 10:23:38 -- pm/common@43 -- $ [[ -e /usr/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:19:33.150 10:23:38 -- pm/common@44 -- $ pid=70342 00:19:33.150 10:23:38 -- pm/common@50 -- $ kill -TERM 70342 00:19:33.150 + [[ -n 1272 ]] 00:19:33.150 + sudo kill 1272 00:19:33.161 [Pipeline] } 00:19:33.183 [Pipeline] // timeout 00:19:33.190 [Pipeline] } 00:19:33.211 [Pipeline] // stage 00:19:33.217 [Pipeline] } 00:19:33.237 [Pipeline] // catchError 00:19:33.246 [Pipeline] stage 00:19:33.249 [Pipeline] { (Stop VM) 00:19:33.264 [Pipeline] sh 00:19:33.543 + vagrant halt 00:19:37.728 ==> default: Halting domain... 00:19:55.899 [Pipeline] sh 00:19:56.177 + vagrant destroy -f 00:19:59.461 ==> default: Removing domain... 
00:19:59.732 [Pipeline] sh 00:20:00.014 + mv output /var/jenkins/workspace/freebsd-vg-autotest_2/output 00:20:00.024 [Pipeline] } 00:20:00.045 [Pipeline] // stage 00:20:00.051 [Pipeline] } 00:20:00.070 [Pipeline] // dir 00:20:00.077 [Pipeline] } 00:20:00.096 [Pipeline] // wrap 00:20:00.104 [Pipeline] } 00:20:00.121 [Pipeline] // catchError 00:20:00.131 [Pipeline] stage 00:20:00.134 [Pipeline] { (Epilogue) 00:20:00.150 [Pipeline] sh 00:20:00.434 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:00.448 [Pipeline] catchError 00:20:00.451 [Pipeline] { 00:20:00.467 [Pipeline] sh 00:20:00.749 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:01.007 Artifacts sizes are good 00:20:01.018 [Pipeline] } 00:20:01.037 [Pipeline] // catchError 00:20:01.048 [Pipeline] archiveArtifacts 00:20:01.054 Archiving artifacts 00:20:01.128 [Pipeline] cleanWs 00:20:01.159 [WS-CLEANUP] Deleting project workspace... 00:20:01.159 [WS-CLEANUP] Deferred wipeout is used... 00:20:01.165 [WS-CLEANUP] done 00:20:01.167 [Pipeline] } 00:20:01.184 [Pipeline] // stage 00:20:01.190 [Pipeline] } 00:20:01.208 [Pipeline] // node 00:20:01.214 [Pipeline] End of Pipeline 00:20:01.258 Finished: SUCCESS